All Posts

Hi @kn450,

For a basic setup with either a standalone Splunk/Stream instance or separate Splunk and Stream instances, the steps at https://docs.splunk.com/Documentation/StreamApp/latest/DeployStreamApp/UseStreamtoingestNetflowandIPFIXdata result in a working configuration.

In my test environment using a standalone instance on RHEL, I made only the following changes to $SPLUNK_HOME/etc/apps/Splunk_TA_stream/local/streamfwd.conf to enable both capture and NetFlow/IPFIX:

[streamfwd]
streamfwdcapture.0.interfaceRegex = ens.+
netflowReceiver.0.port = 9996
netflowReceiver.0.decoder = netflow

I then enabled the netflow metadata stream in the Splunk Stream app. Using SolarWinds NetFlow Generator <https://www.solarwinds.com/free-tools/flow-tool-bundle> (not an endorsement, but it's free), I sent sample IPFIX data to the standalone instance, which Stream successfully decoded:

{"endtime":"2025-06-29T23:20:12Z","timestamp":"2025-06-29T23:20:12Z","bytes_in":0,"dest_ip":"192.168.1.25","dest_port":443,"dest_sysnum":0,"event_name":"netFlowData","exporter_ip":"192.168.1.158","exporter_time":"2025-Jun-29 23:20:12","flow_end_rel":0,"flow_start_rel":0,"input_snmpidx":8,"netflow_version":10,"nexthop_addr":"1.1.1.2","observation_domain_id":0,"output_snmpidx":5,"packets_in":0,"protoid":6,"seqnumber":23000,"src_ip":"192.168.1.132","src_port":15449,"src_sysnum":0,"tcp_flags":0,"tos":0}

Custom NetFlow parsing is described at https://docs.splunk.com/Documentation/StreamApp/latest/DeployStreamApp/AutoinputNetflow.

Can you confirm the default configuration works? If it does, we can dig into any customizations you need. If it doesn't, confirm your Stream instance is receiving correctly formatted IPFIX packets using tcpdump or another local capture tool.
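For that last check, a minimal tcpdump invocation could look like the sketch below. It assumes the port 9996 from the configuration above and an interface named ens192; both are placeholders to adjust for your environment.

# Print the first five UDP packets arriving on the NetFlow/IPFIX port
tcpdump -i ens192 -n -c 5 udp port 9996

If nothing arrives, the problem is upstream of Stream (exporter configuration, routing, or a firewall) rather than in streamfwd.conf.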
If you can figure out which version works for you, you could try downloading it with this: https://github.com/ryanadler/downloadSplunk
Yes, you can define several ports if needed by adding a new receiver to those indexers via an app pushed from the CM. I'm not sure I understood correctly that you flip your receiver port to some invalid value or something like that? Basically, you could have a separate port reserved for internal nodes, blocked by firewall from normal traffic from UFs etc. and allowed only from the SH etc. Another receiver port is for all other UFs and IHFs (intermediate heavy forwarders). Then, when you need to block real incoming indexing traffic, just disable that port. The SH stops using it, as indexer discovery tells it that this port is closed, and continues to use the SH-only port; a sketch of this setup follows below. But as I said, you should still update your license to cover your real ingestion needs or remove unnecessary ingestion.
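As a rough sketch of the two-port idea (the port numbers and app name are examples only, not required values), the app pushed from the CM could contain an inputs.conf like this:

# $SPLUNK_HOME/etc/manager-apps/<your_receiver_app>/local/inputs.conf on the CM
[splunktcp://9997]
# Normal receiving port for UFs and IHFs; set disabled = 1 to block ingestion.
disabled = 0

[splunktcp://9998]
# SH/LM-only port, restricted to the SH by firewall rules.
disabled = 0

Disabling only the first stanza stops UF/IHF traffic while the SH keeps sending its internal logs over the second port.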
Hi, This is a bit of a guess, but maybe it will spark some ideas to try. I wonder if closing the computation inside the loop is not giving the server enough time to send its final response. It might be worth introducing some delay before closing, or trying a "try/catch" approach when closing the computation.
Hi @Fenilleh, Is the issue resolved, or are you still facing it? If the issue persists, please paste whatever errors you are getting in splunkd and mongod. Also, I am attaching a KB article; have a look to see if it's relevant: https://splunk.my.site.com/customer/s/article/KV-Store-Backup-Fails
Contact Splunk Support.  They may have a link to a version that works for you.
@ReiGjuzi  Finding a legacy Splunk Universal Forwarder MSI for Windows 7 SP1 (x64) is tricky since Microsoft and Splunk no longer support Windows 7, and official download pages prioritize newer versions for supported OSes like Windows 10 and 11.   If you can’t find the MSI on Splunk’s official site, avoid unofficial mirrors due to security risks.  
@simonsa I see that you're uploading a .gz file. Please extract it and upload the original, uncompressed file.
@simonsa If the DNS log files are large, they might exceed the upload limits or cause the browser to time out. Try uploading a smaller portion of the log file to see if it succeeds.
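If splitting the file is the way to go, the standard split utility works on Linux/macOS (the file names here are only examples):

# Break dns.log into 50,000-line chunks named dns_part_aa, dns_part_ab, ...
split -l 50000 dns.log dns_part_

Each chunk can then be uploaded separately through Add Data.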
hi @gcusello First of all, thank you for your reply. There's something here I'm curious about: if the .conf file I added contains the correct content and I upload the same file again with a different name, should the result already look correct in the review step, or should I check it in search?
Hi @tanjiro_rengo,

are you inputting using the manual guided Data Input or an input in a conf file? If you can use the manual Data Input, you can do this without any issue. If you need to use inputs.conf, you must remember to rename the file and use crcSalt = <SOURCE> in inputs.conf, otherwise Splunk doesn't read a file twice.

About deleting, you can use the delete command in the search dashboard, but you must first assign the "can_delete" role to your user, otherwise even an admin cannot delete any log; remember to remove this role from your user at the end of this action (it's safer!). Obviously, this is a logical deletion, not a physical deletion; for a physical deletion you can only use the splunk clean eventdata -index <your_index> command from the CLI, but in this way you delete all the data in an index, not only the last file. A minimal sketch of both steps follows below.

Ciao. Giuseppe
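To make that concrete, here is a minimal sketch; the path, index, and sourcetype are placeholders, not values from this thread:

# inputs.conf: monitor the renamed copy and force re-reading despite a matching CRC
[monitor:///var/log/myapp/sample_renamed.txt]
crcSalt = <SOURCE>
index = main
sourcetype = my_custom_logs

Then, with the can_delete role temporarily assigned, the old events can be hidden from search:

index=main source="/var/log/myapp/sample.txt" | delete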
Hi guys, I am new here and I want to explore some things in Splunk. I have a txt file that I uploaded, and I want to ingest the logs in this file by combining them according to a certain format, for example, an event that starts with a "D" line and ends with an "F" line. I created a .conf file for this and restarted Splunk, but does it also affect the already-indexed logs? Do I need to re-ingest them, and if so, how can I delete the existing data and ingest the file again? What is your view of the whole situation?
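For reference, a minimal props.conf sketch for that kind of multi-line event breaking might look like this, assuming each event starts with a line beginning with D (the sourcetype name and regex are placeholders):

# props.conf on the parsing instance
[my_txt_sourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^D

Settings like these only apply to data indexed after the change, which is why deleting and re-ingesting the file comes up in the answer above.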
I'm new to Splunk and I've been working on some labs for practice. Anyway, I'm working on this lab set from this repo (https://github.com/0xrajneesh/Splunk-Projects-For-Beginners/blob/main/project%237-analyzing-dhcp-logs-using-splunk-siem.md) and for some reason, whenever I try to upload the log files in "Add Data", it keeps timing out. I was initially able to do the DNS logs, but now I can't do anything. Is it because I'm on a free trial? Can someone else try it and let me know if you are having the same problem?
Hi @isoutamo, Thanks for the answer. Could you please provide more clarity on this part?

"So my proposal is that you just switch the receiving port of the indexers to some valid port which is allowed only from the SH side by FW. Then your SH (and the LM on the SH) can continue to send their internals to the indexers and everything should work. At the same time, all UFs with static indexer information cannot send events, as the receiving port has changed. If you have any real data inputs on the SH, then you should set up an HF and move those inputs there."

Are there multiple receiving ports for an indexer? And, if so, how can I do that? Thanks, Pravin
Hi everyone, I'm building a small test lab that intentionally includes a Windows 7 SP1 (x64) endpoint, so I really need a forwarder that works with Windows 7. The current Previous Releases Universal Forwarder page lists 9.x as compatible with Windows 10 and 11. I know Windows 7 is ancient at this point, but it would really help me if there's an official archive, mirror, or authenticated link where I can still pull that legacy MSI. Any pointer or working link would be greatly appreciated. Thanks!
Hi @NK,

@gcusello provides a hint about aggregations and split-by fields used in trellis mode. Generally, any output field in the by-clause of an aggregation command like chart, stats, or timechart will be available as a split-by field in trellis, and other fields will be treated as aggregations. You can use timechart as a helper to fill in empty hour and status values:

index=main sourcetype=access_combined
| timechart fixedrange=true limit=0 span=1h useother=false count by status
| untable _time status count
| rename _time as hour
| stats sum(count) as count by hour status
| fieldformat hour=strftime(hour, "%F %T")

When fixedrange=false, timechart will limit its output to the lower and upper _time bounds of the search results. When fixedrange=true, timechart will add empty buckets using the lower and upper _time bounds of your search time range, e.g., earliest=-24h@h latest=@h. When no results are found for an hour of the day, an empty pie chart will be displayed, and missing or zero status values relative to the search results will be aggregated with other status values under the minimum size threshold using the label "other".
Hi @NK,

you cannot use timechart here; use eval and stats instead. Then you can configure something like this:

<dashboard version="1.1" theme="light">
  <label>Test trellis</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>
            index=_internal
            | eval hour=strftime(_time,"%H")
            | stats count BY hour sourcetype
          </query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisTitleY2.visibility">collapsed</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">pie</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">none</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">none</option>
        <option name="charting.lineWidth">2</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trellis.splitBy">hour</option>
      </chart>
    </panel>
  </row>
</dashboard>

Ciao. Giuseppe
Splunk sourcetype=access_combined. What would the Splunk query look like to get an hourly trellis of pie charts by http_status?
Usually it's better to create a new question when needed and link to the old thread; that way you will get more answers to your issue! When you have an issue with the DS, you should add this configuration into outputs.conf, not distsearch.conf. If this is not helping you, then create a new question with more detailed information about your current situation and your configurations!
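For reference, a bare-bones outputs.conf on the DS for forwarding its internal logs might look like this (the hostnames and port are placeholders):

# outputs.conf on the deployment server
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997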
Exactly that way. And Splunk changed the requirement that indexers must be on the higher version as of 9.x (not sure which minor x it was). Now you can officially have a newer UF version than the receiving HF/IDX version.