
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

@simonsa I see that you're uploading a .gz file. Please extract it and upload the original, uncompressed file.
@simonsa If the DNS log files are large, they might exceed the upload limits or cause the browser to time out. Try uploading a smaller portion of the log file to see if it succeeds.
Hi @gcusello First of all, thank you for your reply. There's something here I'm curious about. If the .conf file I have added contains the correct content and I upload the same file with a different name, is the result in the review section correct, or should I check the search section?
Hi @tanjiro_rengo, are you inputting the file using the manual guided Data Input or an input in a conf file? If you can use the manual Data Input, you can do this without any issue. If you need to use inputs.conf, you must remember to rename the file and use crcSalt=<SOURCE> in inputs.conf; otherwise Splunk won't read the same file twice. About deleting: you can use the delete command in the search dashboard, but you must first assign the "can_delete" role to your user; otherwise, even an admin cannot delete any log. Remember to remove this role from your user at the end of this action (it's safer!). Obviously, this is a logical deletion, not a physical deletion; for a physical deletion you can only use the splunk clean eventdata -index <your_index> command from the CLI, but in this way you delete all the data in an index, not only the last file. Ciao. Giuseppe
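As a minimal sketch of the inputs.conf stanza Giuseppe describes (the monitor path, index, and sourcetype below are illustrative placeholders, not from the thread):

# inputs.conf -- hypothetical monitor stanza for the renamed copy
[monitor:///var/log/myapp/renamed_copy.log]
index = main
sourcetype = my_custom_sourcetype
# literal string: Splunk substitutes the source path into the CRC seed
crcSalt = <SOURCE>

Here crcSalt = <SOURCE> is written literally; Splunk adds the file's source path to the CRC calculation, so a renamed copy of the same content is treated as a new file and read again.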
Hi guys, I am new here and I want to explore some things in Splunk. I have a txt file; I uploaded it, and I want to get the logs in this file by combining them according to a certain format. For example, a log that starts with line D and ends with line F. I created a .conf file for this and restarted Splunk, but does it also affect the existing logs? Do I need to ingest these logs again, and if so, how can I delete the existing ones and ingest them again? What is your view of the whole situation?
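For reference, a hedged sketch of the kind of props.conf stanza typically used for this sort of multi-line merging (the sourcetype name and the D/F line patterns are illustrative assumptions, not taken from the question):

# props.conf -- hypothetical multi-line merging for D...F blocks
[my_custom_sourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^D
MUST_BREAK_AFTER = ^F

Note that these line-merging settings apply at parse time, so they only affect newly indexed data, not events already in the index.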
I'm new to Splunk and I've been working on some labs for practice. Anyway, I'm working on this lab set from this repo (https://github.com/0xrajneesh/Splunk-Projects-For-Beginners/blob/main/project%237-analyzing-dhcp-logs-using-splunk-siem.md) and for some reason, whenever I try to upload the log files in "Add Data", it keeps timing out. I was initially able to do the DNS logs, but now I can't do anything. Is it because I'm on a free trial? Can someone else try and let me know if you are having the same problem?
Hi @isoutamo, thanks for the answer. Could you please provide more clarity on this part? "So my proposal is that you just switch the receiving port of the indexers to some valid port which is allowed only from the SH side by the FW. Then your SH (and the LM on the SH) can continue sending their internal logs to the indexers and everything should work. At the same time, all UFs with static indexer information cannot send events, as the receiving port has changed. If you have any real data inputs on the SH, then you should set up a HF and move those inputs there." Are there multiple receiving ports for an indexer? And, if so, how can I do that? Thanks, Pravin
Hi everyone, I'm building a small test lab that intentionally includes a Windows 7 SP1 (x64) endpoint, so I really need a forwarder that works with Windows 7. The current Previous Releases Universal Forwarder page lists 9.x as compatible with Windows 10 and 11. I know Windows 7 is ancient at this point, but is there an official archive, mirror, or authenticated link where I can still pull that legacy MSI? Any pointer or working link would be greatly appreciated. Thanks!
Hi @NK, @gcusello provides a hint about aggregations and split-by fields used in trellis mode. Generally, any output field in the by-clause of an aggregation command like chart, stats, or timechart will be available as a split-by field in trellis, and other fields will be treated as aggregations. You can use timechart as a helper to fill in empty hour and status values:

index=main sourcetype=access_combined
| timechart fixedrange=true limit=0 span=1h useother=false count by status
| untable _time status count
| rename _time as hour
| stats sum(count) as count by hour status
| fieldformat hour=strftime(hour, "%F %T")

When fixedrange=false, timechart will limit its output to the lower and upper _time bounds of the search results. When fixedrange=true, timechart will add empty buckets using the lower and upper _time bounds of your search time range, e.g., earliest=-24h@h latest=@h. When no results are found for an hour of the day, an empty pie chart will be displayed, and missing or zero status values relative to the search results will be aggregated with other status values under the minimum size threshold using the label "other".
Hi @NK, you cannot use timechart for this, but you can use eval and stats; then you can configure something like this:

<dashboard version="1.1" theme="light">
  <label>Test trellis</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=_internal | eval hour=strftime(_time,"%H") | stats count BY hour sourcetype</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisTitleY2.visibility">collapsed</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">pie</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">none</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">none</option>
        <option name="charting.lineWidth">2</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trellis.splitBy">hour</option>
      </chart>
    </panel>
  </row>
</dashboard>

Ciao. Giuseppe
Splunk sourcetype=access_combined. What would the Splunk query look like to get an hourly trellis of pie charts by http_status?
Usually it's better to create a new question when needed and then link to the old thread. That way you will get more answers to your issue! When you have an issue with the DS, you should add this configuration into outputs.conf, not distributedsearch.conf. If this is not helping you, then create a new question with more detailed information about your current situation and your configurations!
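For illustration only (the group name, server names, and ports are placeholders, not from this thread), the kind of outputs.conf forwarding stanza being referred to looks along these lines:

# outputs.conf -- hypothetical forwarding configuration
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997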
Exactly that way. And Splunk changed the requirement to have a higher version on the indexers with version 9.x (not sure which minor x it was). Now you can officially have a newer UF version than the receiving HF/IDX version.
Yes, I'm aware that the MC has these panels where you can see some statistics about license usage. Unfortunately, those are not aligned with the current license policy. If you have an official Enterprise license, then the license policy is 45/60, not 3/30. And if your license size is 100GB+, then it's non-blocking for searches. And independent of your license, there is no blocking on the ingesting side. So even if you have a hard license breach, ingesting still works, but you cannot run any searches except against internal indexes (to figure out why you had the breach)! I'm expecting that, as you have the LM on your SH and you are sending your internal logs (including those which the LM needs) to your idx cluster, except when you have blocked your indexer discovery with wrong port information, you have been hit by a "missing connection to LM" rather than a failure to index internal logs. Anyhow, you needn't do anything about internal logs, as those are not counted towards your license usage. Only real data sent to indexes other than _* is counted as indexed data. Usually all that data comes from UF/HF, not your SH etc. So my proposal is that you just switch the receiving port of the indexers to some valid port which is allowed only from the SH side by the FW. Then your SH (and the LM on the SH) can continue sending their internal logs to the indexers and everything should work. At the same time, all UFs with static indexer information cannot send events, as the receiving port has changed. If you have any real data inputs on the SH, then you should set up a HF and move those inputs there. Of course, the real fix is to buy a big enough Splunk license....
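To sketch the port switch (9998 and the group/host names are arbitrary examples, not from the thread): on each indexer, the receiving port is a splunktcp stanza in inputs.conf, and the SH's outputs.conf then has to point at the new port:

# inputs.conf on each indexer: open the new receiving port
[splunktcp://9998]
disabled = 0

# outputs.conf on the SH: send internals to the new port
[tcpout:primary_indexers]
server = idx1.example.com:9998, idx2.example.com:9998

UFs still configured with the old port (commonly 9997) would then fail to connect, which is the intended effect here.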
This product was released back in 2023: https://community.splunk.com/t5/Product-News-Announcements/Observability-Cloud-Splunk-Distribution-of-the-OpenTelemetry/ba-p/672091 I'm using it successfully; however, it seems like it is not being maintained. No new versions of the Add-on have been released to keep up with the changes in the helm chart. I was able to successfully update from the default image in this version (0.86.0) to the latest (0.127.0); however, the EKS Add-on creates the config map that is mounted to the agents with some deprecated values that are no longer valid for the latest version of the image. Is there any intent to maintain this EKS Add-on? Or is the recommendation to migrate to the helm chart? (https://github.com/signalfx/splunk-otel-collector-chart)
My SignalFlow queries consistently end with "org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk." My code is similar to the example here: https://github.com/signalfx/signalflow-client-java I create the transport and client, then go in a loop and execute the same query once per iteration with an updated start time each time. I read all the messages in the iterator, though I ignore some types. I close the computation at the end of each iteration. The query seems to work fine; I get the data I expect. The stack trace looks like this:

Jun 27, 2025 4:33:16 PM com.signalfx.signalflow.client.ServerSentEventsTransport$TransportEventStreamParser close
SEVERE: failed to close event stream
org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk
    at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:250)
    at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:222)
    at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:183)
    at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:210)
    at org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:312)
    at org.apache.http.impl.execchain.ResponseEntityProxy.streamClosed(ResponseEntityProxy.java:142)
    at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
    at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:172)
    at java.base/sun.nio.cs.StreamDecoder.implClose(StreamDecoder.java:377)
    at java.base/sun.nio.cs.StreamDecoder.close(StreamDecoder.java:205)
    at java.base/java.io.InputStreamReader.close(InputStreamReader.java:192)
    at java.base/java.io.BufferedReader.close(BufferedReader.java:525)
    at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportEventStreamParser.close(ServerSentEventsTransport.java:476)
    at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportChannel.close(ServerSentEventsTransport.java:396)
    at com.signalfx.signalflow.client.Computation.close(Computation.java:168)

my code here

Should I be doing something different? Thanks
Thanks. I'm thinking it might have been a time sync issue. If I set the start time slightly in the past, even by 1 second, it works. What I've settled on is setting the start time on the minute (no seconds) with the same value for the end time. That seems to return the single record that I want. Thanks
@kiran_panchavat - this isn't strictly true. @ramiiitnzv What is the reason you're trying to use a dev license with ES? If you are a customer and want to try out ES, then you should speak to your sales account team within Splunk; if you don't know who this is, then you can try going via https://www.splunk.com/en_us/talk-to-sales.html. If you want to build apps that integrate with ES, then a Dev license is probably appropriate, but as others have said, you don't automatically get access to the ES app within Splunkbase, as this is based on entitlements. Ultimately, I think if you need access to ES then it's the sales team who can grant access; if you explain the reasoning to them, they should be able to find a resolution for you. Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I know this is an old forum thread, but I'm having this same issue. My distributedsearch.conf was empty. Should I add the values mentioned in that scenario?
Here is an enhanced version of the dashboard which performs the actions you described (more or less).

<form version="1.1" theme="light">
  <label>Token-driven repetition save</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults format=csv data="field value_1 value_2" | stats count as counter</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <done>
            <condition match="$result.counter$ &gt; 1">
              <eval token="current">if($result.counter$ &gt; 0,$result.counter$,null())</eval>
              <set token="trace"></set>
            </condition>
            <condition>
              <set token="trace"></set>
              <unset token="current"/>
            </condition>
          </done>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <search>
          <query>| makeresults format=csv data="field value_1 value_2" | eval spl=case(field="value_1","| inputlookup test_2.csv | search NOT field=\""+field+"\" | outputlookup test_2.csv", field="value_2", "| makeresults | eval field=\""+field+"\" | outputlookup append=t test_2.csv")</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>$current$</title>
        <search>
          <query>| makeresults format=csv data="field value_1 value_2" | eval spl=case(field="value_1","| inputlookup test_2.csv | search NOT field=\""+field+"\" | outputlookup test_2.csv", field="value_2", "| makeresults | eval field=\""+field+"\" | outputlookup append=t test_2.csv") | eval counter=$current$ | tail $current$ | reverse</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <done>
            <condition match="$result.counter$ &gt; 1">
              <set token="spl">$result.spl$</set>
              <eval token="current">if($result.counter$ &gt; 1,$result.counter$-1,null())</eval>
            </condition>
            <condition>
              <eval token="spl">if($result.counter$ &gt; 0,$result.spl$,null())</eval>
              <unset token="current"/>
            </condition>
          </done>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <search>
          <query>$spl$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <done>
            <unset token="spl"></unset>
          </done>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>