All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Ok, I'm pretty sure my theory is right. Look at this stack trace:

at org.apache.http.impl.BHttpConnectionBase.close(BHttpConnectionBase.java:317)
at org.apache.http.impl.conn.LoggingManagedHttpClientConnection.close(LoggingManagedHttpClientConnection.java:81)
at org.apache.http.impl.execchain.ConnectionHolder.releaseConnection(ConnectionHolder.java:103)
at org.apache.http.impl.execchain.ConnectionHolder.close(ConnectionHolder.java:156)
at org.apache.http.impl.execchain.HttpResponseProxy.close(HttpResponseProxy.java:62)
at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportChannel.close(ServerSentEventsTransport.java:385)
at com.signalfx.signalflow.client.Computation.close(Computation.java:168)

If you use a debugger to print the identity hash code of the inBuffer variable that is being cleared on that line, you'll see that it is the same object that later returns -1 bytes read at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:250), which causes the MalformedChunkCodingException. This is a bug in your SDK.
Changing TransportChannel's close() method to be more like this seems to fix the problem:

public void close() {
    super.close();
    this.streamParser.close();
    try {
        this.response.close();
    } catch (IOException ex) {
        log.error("failed to close response", ex);
    }
    try {
        this.connection.close();
    } catch (IOException ex) {
        log.error("failed to close connection", ex);
    }
}

In that case, the wire logger output shows the trailing chunk:

2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "event: control-message[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "data: {[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "data: "event" : "END_OF_CHANNEL",[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "data: "timestampMs" : 1751387752387[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "data: }[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "[\r][\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "0[\r][\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "[\r][\n]"

Thanks.
Yes, the $row_count_tok$ is being set accordingly (24 or 40, depending on the screen orientation).
Hi @tomapatan, are you able to visualise what the row_count_tok token is set to by your JS? It sounds like the queries do not see it as being set. It's worth adding the token to a title or HTML element somewhere so you can confirm. Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
If it works, it works. It's worth noting, though, that it's a rather "heavy" way of doing it. spath is a fairly intensive command, and you're running it over all your events. For a one-off ad-hoc search it might be OK, but if you do it often, you might want to optimize it a bit.
Hi all, I've got a dashboard that uses a JS script to dynamically set the $row_count_tok$ token based on screen orientation:

24 for landscape (2 pages of 12 rows)
40 for portrait (2 pages of 20 rows)

I pass this token into my search to determine how many rows to return, and then paginate them like so:

......
| head $row_count_tok$
| streamstats count as Page
| eval Page = case(
    $row_count_tok$=24 AND Page<=12, 0,
    $row_count_tok$=24 AND Page>12, 1,
    $row_count_tok$=40 AND Page<=20, 0,
    $row_count_tok$=40 AND Page>20, 1
)
| eval display = floor(tonumber(strftime(now(), "%S")) / 10) % 2
| where Page = display
| fields - display

The token and logic work (tested manually), but I get this message on page load, indicating the token was not ready when the search ran: "Search is waiting for input..." How do I force the query to wait for the token to load? Many thanks.
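For reference, the paging arithmetic in the SPL above can be sketched in Python. This is only an illustration of the case() and display logic, not anything Splunk runs; the helper names are made up:

```python
import math

def page_for_row(row_number, rows_per_screen):
    """Assign each row to page 0 or 1, mirroring the SPL case() logic.

    rows_per_screen is the $row_count_tok$ value (24 landscape, 40 portrait);
    the first half of the rows go on page 0, the rest on page 1.
    """
    half = rows_per_screen // 2
    return 0 if row_number <= half else 1

def visible_page(epoch_seconds):
    """Mirror of: floor(tonumber(strftime(now(), "%S")) / 10) % 2.

    The visible page flips every 10 seconds of the current minute.
    """
    seconds = epoch_seconds % 60
    return math.floor(seconds / 10) % 2
```

Written this way it is easy to see the intended behavior: `where Page = display` shows page 0 during seconds 0-9, 20-29, 40-49 and page 1 otherwise.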
This is true, and I guess there is also a chance that the term "stderr" could exist in the log for a source=stdout log...! I tend to use TERM because I find it's sometimes the easiest way to improve search performance, and not enough people know of its existence (only 1% of Splunk Cloud customers in 2020, according to Rich Morgan).
Thanks for the input! I have Assets/Identities populated, I suspect my issue is CIM.  Only issue is I'm not clear exactly what field is missing. 
I did get results using spath. I'm not sure if it is the best way, but it does seem to remove all other sources from the source field:

index=dnrc_docker sourcetype=dnrc:docker
| spath source
| search source="stderr"
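What that spath-then-search pipeline does can be sketched in Python: parse each raw event as JSON and keep only events whose parsed source field is exactly "stderr". The sample events below are hypothetical, shaped like the Docker JSON this thread describes:

```python
import json

# Hypothetical raw docker events; the second one only *mentions* stderr
# in its message body, so a bare keyword search could match it by mistake.
events = [
    '{"source": "stderr", "log": "boom"}',
    '{"source": "stdout", "log": "mentions stderr in the message body"}',
]

def stderr_only(raw_events):
    """Keep events whose parsed source field is exactly "stderr",
    mirroring: | spath source | search source="stderr"."""
    for raw in raw_events:
        parsed = json.loads(raw)
        if parsed.get("source") == "stderr":
            yield parsed

matches = list(stderr_only(events))
```

This also illustrates why spath is "heavy": every event has to be parsed before the filter can be applied.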
Using the where command returned no results; I'm not sure why.
Hello Tejas Thanks for your answer. I have setup Splunk with the keys provided by our Docusign admin. Looks like he doesn't know how to generate them. I had already reviewed the documentation. Fro... See more...
Hello Tejas, thanks for your answer. I have set up Splunk with the keys provided by our DocuSign admin; it looks like he doesn't know how to generate them. I had already reviewed the documentation. From my point of view as a Splunk admin, it is not clear what has to be done on the application side. Thanks, Lionel
Let me add a line of thought I tried that failed to resolve this: looking at the deployed Splunk host files, I saw that the number I am trying to modify is displayed from a variable called scanCount. Also, in the search_results_info file (info.csv) of the job, I saw a field called scan_count that was set to 0 throughout the entire search process (which strengthened my suspicion that this is related). I tried to edit the file mid-run but encountered issues, and it didn't affect the UI when I succeeded. I also attempted to update it in a more generic way (instead of a one-time bypass), through the code of the application:

self.search_results_info["scan_count"] = 100000

But this results in an exception: 'ObjectView' object does not support item assignment. That means it's a read-only variable that can't be modified from the code. I was wondering if there's a different way to update it via non-direct access. Thanks!
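For anyone puzzled by that exception: in Python, "object does not support item assignment" is the TypeError raised when a class implements `__getitem__` but not `__setitem__`. The sketch below is a minimal stand-in for a read-only wrapper like the SDK's ObjectView, not its actual implementation:

```python
class ReadOnlyView:
    """Minimal illustration of a read-only wrapper: reads via
    view["key"] work, but item assignment is not implemented."""

    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    # No __setitem__ defined, so view["key"] = value raises
    # TypeError: 'ReadOnlyView' object does not support item assignment.

view = ReadOnlyView({"scan_count": 0})
try:
    view["scan_count"] = 100000
    mutated = True
except TypeError:
    mutated = False
```

So the write fails at the wrapper itself, before anything reaches Splunk; a different API surface (if one exists) would be needed to change the underlying value.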
I think the easiest way to verify whether that field is indexed (there might be some additional index-time extraction, apart from the simple indexed-extraction configuration for the whole event; yes, I know it's confusing ;-)) is to try to search for:

index=your_windows_index EventID::4624

The important thing is that you're not searching for EventID=4624 but for EventID::4624. If you get any results, that will mean this field is indeed indexed, and you'll have to find where it's being extracted at index time.
First of all, thanks for replying @livehybrid. I saw in the documentation that the default value of run_in_preview is true. I tried explicitly adding it to commands.conf both as true and as false (in separate tests), and it doesn't affect my experience. Two questions, since I don't completely understand preview mode:

1. What did you expect it to change? The 0 to increase in correlation with the fetched results? Or would I have been able to somehow set the "total" count?
2. Could you please expand on what exactly preview mode is? I can't find any difference in behavior when running with the false setting.

Other than that, if you have any other suggestions on how I can resolve the original issue, I would highly appreciate more thoughts and things to try. Thanks!
Hi @malix_la_harpe, many thanks for this comprehensive answer. I've been testing the query and it gives promising results, but I have one issue and I hope you will be able to help me. The results table shouldn't contain the example2 row with a FAILURE result, because it is the beginning of a login process that completes successfully in the example3 row. In other words, the example2 row should be removed from the table. I've tried to adjust the query but unfortunately wasn't able to. I hope I explained the problem clearly; if not, please let me know.
Hi @ND1, there is a tutorial at https://help.splunk.com/en/splunk-soar/soar-cloud/develop-apps/build-playbooks/use-the-playbook-editor-to-create-and-view-playbooks-to-automate-analyst-workflows/add-custom-code-to-your-splunk-soar-cloud-playbook-with-a-custom-function which might be useful as a starting guide. There is also a sample function here: https://gist.github.com/gf13579/e7cd4132c7c61c5cabec4ce953f5a455, and a bunch of custom function examples at https://github.com/phantomcyber/playbooks/tree/7.0/custom_functions which might also help! Good luck!
Can you tell me where to check whether I have indexed extractions enabled? I don't know if this is relevant, but the EventID field is normal. So the EventCode may be "4624 4624", but the EventID is just 4624. And as I mentioned in the comments below, this only happens with my "XmlWinEventLog:Security" and "XmlWinEventLog:DNS Server" sourcetypes; it does not affect other XmlWinEventLog sourcetypes like Application and System, which from my perspective is really strange!
Hi @alorw, I believe the number of events displayed in this scenario is driven by the "preview". Have you got run_in_preview = true in your commands.conf?
I tried removing inputs.conf from the TA because I only wanted the props/transforms on my search head cluster. I still got the error. How?
Hello family, does anyone know of, or have sources that explain, how to use or build custom functions in Splunk SOAR?
Thanks for the update. Can you please guide us on how to install the AppDynamics npm packages from the official (trusted) source, so we can avoid these malicious dependent packages?