All Posts

1. Is it a fresh installation or an upgrade?
2. You have the immediate debugging steps on screen.
Could be. I didn't copy-paste it; I wrote it out here by hand, so there might be a typo.
Thank you for your reply. I checked the internal log, but there were no errors related to ThousandEyes Dynamic tests. Therefore, I checked whether the App is configured to retrieve Dynamic test data in the first place. Upon reviewing the thousandeyes_constant.py file, I found that ENDPOINT_TEST_TYPES = ["agent-to-server", "http-server"] does not include “Dynamic.” This indicates that the current specification does not support retrieving Dynamic test data.
I have a feeling that using tokens in the count part of the XML config was broken at some point. It used to work, then it stopped working, but now that I've tested again, it does work. What version are you on?
Hello everyone, I use a Dell Windows laptop, and after downloading the Splunk Enterprise 9.4.3 app for Windows, I'm unable to install it because of an error prompt. Please, can I get a step-by-step approach to fixing this?
This did work, but I had to remove the "s" on "optimizations", and presto. Thank you.
@oawill  mltk_ai_commander_dataset.csv is a training dataset that ships with MLTK 5.6.0 and higher, specifically for hands-on exercises with the LLM integrations feature. It contains example data used to train models to distinguish between malicious and benign PowerShell scripts. Since this is an example training dataset provided by Splunk for educational and demonstration purposes with the MLTK, the specific authorship and attribution details may not be extensively documented in the traditional dataset-credits format. The dataset is designed as sample data for users to practice with the AI Commander functionality. If this helps, please upvote.
Hi @bellb

There isn't a published PDF. However, if you go to https://help.splunk.com/en/splunk-enterprise/release-notes-and-updates/release-notes/9.4/whats-new/welcome-to-splunk-enterprise-9.4 and click on the Print button, you should hopefully be able to save it as / print to PDF.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I am trying to find the Dataset Credits for mltk_ai_commander.csv, which comes with MLTK 5.6.0 and higher, according to the user guide. I checked the MLTK Dataset Credits page, but it looks like it hasn't been updated for this version yet. Does anyone know if there is somewhere else I can find authorship or attribution information?
Can I get a PDF of the Splunk Enterprise 9.4.3 Release Notes?
OK. It seems that even with "where", Splunk optimizes this search and it turns into

index=whatever source=CASE("stderr") "stderr"

which obviously again searches for the source as an indexed field only. (same goes

You can make it work if you disable optimizations:

index=whatever stderr
| noop search_optimization=false
| where source="stderr"
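If you want to confirm what actually ran, compare the optimizedSearch property in the Job Inspector for the two variants (the index name is just a placeholder here):

index=whatever stderr | where source="stderr"
index=whatever stderr | noop search_optimization=false | where source="stderr"

The first one gets rewritten as shown above; the second runs the where filter as written.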
This is happening because I've got this set up, and it looks like the only way to refresh is to enter edit mode and exit without saving. Any ideas? cc @livehybrid

<option name="count">$row_count_tok$</option>
That is intriguing because I was pretty sure it would work. I tried to recreate your case locally with makeresults | collect and it indeed doesn't find it with where. I'll keep digging.
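For anyone who wants to reproduce it, the recreation was something along these lines (a rough sketch; the index and event text are made up):

| makeresults
| eval _raw="something went wrong on stderr"
| collect index=whatever source=stderr

and then searching it back with:

index=whatever stderr
| where source="stderr"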
For some additional context: the dashboard actually works unless I switch from portrait to landscape or the other way round. When that happens, the only way to resolve it is to enter edit mode and then exit without making any changes. Simply refreshing the page doesn't work as expected, although the token does update automatically (I've got the token in a title so I can view its value).
Ok, I'm pretty sure my theory is right. Look at this stack trace:

at org.apache.http.impl.BHttpConnectionBase.close(BHttpConnectionBase.java:317)
at org.apache.http.impl.conn.LoggingManagedHttpClientConnection.close(LoggingManagedHttpClientConnection.java:81)
at org.apache.http.impl.execchain.ConnectionHolder.releaseConnection(ConnectionHolder.java:103)
at org.apache.http.impl.execchain.ConnectionHolder.close(ConnectionHolder.java:156)
at org.apache.http.impl.execchain.HttpResponseProxy.close(HttpResponseProxy.java:62)
at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportChannel.close(ServerSentEventsTransport.java:385)
at com.signalfx.signalflow.client.Computation.close(Computation.java:168)

If you use a debugger to print the identity hashcode of the inBuffer variable that is being cleared on that line, you'll see that it is the same object that is later returning -1 bytes read at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:250), which causes the MalformedChunkCodingException. This is a bug in your SDK. Changing TransportChannel's close() method to be more like this seems to fix the problem:

public void close() {
    super.close();
    this.streamParser.close();
    // Close the response first, then the underlying connection,
    // logging (rather than propagating) any close failures.
    try {
        this.response.close();
    } catch (IOException ex) {
        log.error("failed to close response", ex);
    }
    try {
        this.connection.close();
    } catch (IOException ex) {
        log.error("failed to close connection", ex);
    }
}

In that case, the wire logger output shows the trailing chunk:

2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "event: control-message[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "data: {[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "data: "event" : "END_OF_CHANNEL",[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "data: "timestampMs" : 1751387752387[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "data: }[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "[\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "[\r][\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "0[\r][\n]"
2025-07-01 11:35:52.923 CDT | FINE | org.apache.http.wire | http-outgoing-2 << "[\r][\n]"

thanks
Yes, the $row_count_tok$ is being set accordingly (24 or 40, depending on the screen orientation).
Hi @tomapatan

Are you able to visualise what the row_count_tok token is set to by your JS? It sounds like the queries do not see it as being set. It's worth adding the token in a title or html somewhere so you can confirm; see the sketch just below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
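As a rough sketch of what that could look like in the Simple XML (the title text itself is arbitrary):

<title>row_count_tok = $row_count_tok$</title>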
If it works, it works. It's worth noting, though, that it's a rather "heavy" way of doing it. spath is a fairly intensive command and you're running it over all your events. I suppose for a one-off ad-hoc search it might be OK, but if you do it often, you might want to optimize it a bit.
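For instance, pointing spath at only the path you actually need, rather than letting it auto-extract every field, is usually much cheaper (the path and field names here are made up):

index=whatever stderr
| spath input=_raw path=log.level output=level
| where level="ERROR"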
Hi all,

I’ve got a dashboard that uses a JS script to dynamically set the $row_count_tok$ token based on screen orientation:
- 24 for landscape (2 pages of 12 rows)
- 40 for portrait (2 pages of 20 rows)

I pass this token into my search to determine how many rows to return, and then paginate them like so:

......
| head $row_count_tok$
| streamstats count as Page
| eval Page = case(
    $row_count_tok$=24 AND Page<=12, 0,
    $row_count_tok$=24 AND Page>12, 1,
    $row_count_tok$=40 AND Page<=20, 0,
    $row_count_tok$=40 AND Page>20, 1
)
| eval display = floor(tonumber(strftime(now(), "%S")) / 10) % 2
| where Page = display
| fields - display

The token and logic work (tested manually), but I get this message on page load, indicating the token was not ready when the search ran:

Search is waiting for input...

How do I force the query to wait for the token to load? Many thanks.
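For reference, the token-setting part of the JS looks roughly like this (a simplified sketch, not the exact script):

require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
    // 24 rows in landscape, 40 in portrait, as described above
    var rows = (window.innerWidth > window.innerHeight) ? 24 : 40;
    // set the token on both the default and submitted models so that
    // searches referencing $row_count_tok$ pick it up
    mvc.Components.get('default').set('row_count_tok', rows);
    var submitted = mvc.Components.get('submitted');
    if (submitted) {
        submitted.set('row_count_tok', rows);
    }
});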
This is true, and I guess there is also a chance that the term "stderr" could exist in the log for a source=stdout log...! I tend to use TERM because I find it's sometimes the easiest way to improve search performance, and not enough people know of its existence (only 1% of Splunk Cloud customers in 2020, according to Rich Morgan).