All Posts

I am using the Splunk Add-on for Microsoft Cloud Services to retrieve Event Hub data in Splunk Cloud, but I encountered the following error in the internal log:

2025-07-09 02:16:40,345 level=ERROR pid=1248398 tid=MainThread logger=modular_inputs.mscs_azure_event_hub pos=mscs_azure_event_hub.py:run:925 | datainput="Azure_Event_hub" start_time=1752027388 | message="Error occurred while connecting to eventhub: Failed to authenticate the connection due to exception: [Errno -2] Name or service not known Error condition: ErrorCondition.ClientError Error Description: Failed to authenticate the connection due to exception: [Errno -2] Name or service not known"

The credentials should not be an issue, as I am using the same credentials in FortiSIEM and successfully get data from the Event Hub there. Could anyone help identify the cause of the issue and suggest how to resolve it?
You could also look at ingest actions https://docs.splunk.com/Documentation/Splunk/9.4.2/Data/DataIngest which give you a slightly easier way to achieve the same thing - at least they can be configured in the UI, so you get an interactive way of seeing the results of your configuration.
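For a rough idea of what such a ruleset ends up as under the hood, here is a minimal, hypothetical sketch (the stanza and rule names are made up, and the exact output depends on what you build in the UI):

props.conf:
[my_sourcetype]
RULESET-drop_noise = drop_noise_rule

transforms.conf:
[drop_noise_rule]
INGEST_EVAL = queue=if(match(_raw, "pattern_to_drop"), "nullQueue", queue)

The nice part of doing it through the UI is that you can preview sample events against the rule before deploying it.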
EP will not do that cross-event analysis, at least not in its current form, but I would imagine that aggregations, and hence the ability to handle event relationships, will come at some point. Your comment about 'a lot of unnecessary audit logs from a little script' makes me wonder if your little script could be pruned to be even smaller.
Is sourcetype ending up a multivalue field or does it contain odd characters? index=ilo | head 1 | eval st_count=mvcount(sourcetype), st_len=len(sourcetype) | eval tmp_sourcetype=":".sourcetype.":" | table sourcetype tmp_sourcetype st_count st_len  
I'm not sure about Dashboard Studio, as I prefer XML, but in XML you can disable these options below the panels. https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML#Shared_options See link.exportResults.visible at that link, and also hideExport.
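For example, inside the chart or table element in Simple XML it would be something like this (using the option name from that panel reference; similar link.* options exist for the other menu entries):

<option name="link.exportResults.visible">false</option>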
It looks like I closed this thread early because I've hit an issue. I decided to go with the lookup table after all. The query I marked as correct does work, but it creates duplicate hosts, which throws off the results.

At first I thought it was just because the host name in Splunk had different casing than the MemberServers.csv file. I tweaked the query to lower-case the host names (all the names in MemberServers.csv are lower case) and removed the "| where total=0" line. That showed me there are 2 hosts for every host returned by the Splunk query: one from the original query and one appended from the .csv file. For some reason the stats sum(count) command doesn't see them as identical hosts but as two different ones, even though their names are exactly the same (including case).

This is the query which tells me there are now duplicates, one with count 0 (presumably added by the append command) and one with a count greater than 0 (presumably added by the Splunk query):

index=sw tag=MemberServers sourcetype="windows PFirewall Log"
| eval host=lower(host)
| stats count BY sourcetype host
| append [ | inputlookup MemberServers.csv | eval count=0 | fields sourcetype host count]
| stats sum(count) AS total BY sourcetype host

I tried replacing append with a join command but I ran into problems with that too. Any help would be appreciated.
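For what it's worth, a minimal sketch of the normalisation I intend to try next, assuming the duplicates come from hidden whitespace or case differences in host (trimming and lower-casing on both the event side and the lookup side):

index=sw tag=MemberServers sourcetype="windows PFirewall Log"
| eval host=lower(trim(host))
| stats count BY sourcetype host
| append [ | inputlookup MemberServers.csv | eval host=lower(trim(host)), count=0 | fields sourcetype host count]
| stats sum(count) AS total BY sourcetype host
| where total=0

If the duplicates persist, comparing len(host) for the two copies of the same host should show whether one of them carries a non-printing character.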
@livehybrid for your comment "Without the original data it's a little hard to say" - will this work?

index="xxxx" field.type="xxx" OR index=Summary_index
| eventstats values(index) as sources by trace
| where mvcount(sources) > 1
| spath output=yyyId path=xxxId input=_raw
| where isnotnull(yyyId) AND yyyId!=""
| bin _time span=5m AS hour_bucket
| stats latest(_time) as last_activity_in_hour, count by hour_bucket, yyyId
| stats count by hour_bucket
| sort hour_bucket
| rename hour_bucket AS _time
| timechart span=5m values(count) AS "Unique Customers per Hour"

It still doesn't return any results.
Hi all, I'm collecting iLO logs in Splunk and have set up configurations on a Heavy Forwarder (HF). Logs are correctly indexed in the ilo index with sourcetypes ilo_log and ilo_error, but I'm facing an issue with search results.

When I run index=ilo | stats count by sourcetype, it correctly shows the count for ilo_log and ilo_error. Also, index=ilo | spath | table _raw sourcetype confirms logs are indexed with the correct sourcetype. However, when I search directly with index=ilo sourcetype=ilo_log, index=ilo sourcetype=ilo_error, or even index=ilo ilo_log, I get zero results. Strangely, sourcetype!=ilo_error returns all ilo_log events, and the same for ilo_error.

props.conf:
[source::udp:5000]
TRANSFORMS-set_sourcetype = set_ilo_error
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

transforms.conf:
[set_ilo_error]
REGEX = (failure|failed)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::ilo_error
WRITE_META = true
This is a thread in which the latest response was 11 years ago. I wouldn't count on a reasonable answer.
Dunno about Edge Processor but the general answer to any "inter-event" question regarding forwarding/filtering is "no". Splunk processes each event independently and doesn't carry any state from one event to another (which makes sense when you realize that two subsequent events from the same source can be forwarded to two different receivers and end up on two completely different indexers).
We talked about this on Slack. Six indexers might or might not be sufficient for 1TB/day. It depends on the search load. Remember two things:

1. Acceleration searches are quite heavy. They plow through heaps of data to create the acceleration summaries (which they then write, additionally stressing I/O).
2. Acceleration searches are at the far end of the priority queue when it comes to scheduling searches. So your acceleration searches might simply be running when there is already a whole lot of other things going on in your system.

That's one thing. Another thing is that CIM datamodels rely on tags and eventtypes, which in turn rely on the raw data and how the fields are extracted. You might simply have "heavy" sourcetypes. You could try to dry-run an acceleration search as an ad-hoc search (get it with the acceleration_search option for the datamodel command) and inspect the job to see where it spends most of its time. It doesn't have to be directly tied to the datamodel or acceleration settings themselves (although it still might be).

Of course, you could try to run more, shorter searches as was already suggested (and as is recommended in https://help.splunk.com/en/splunk-enterprise/manage-knowledge-objects/knowledge-management-manual/9.3/use-data-summaries-to-accelerate-searches/accelerate-data-models#id_80506253_eb97_4bd6_9185_739b6a6e6087__Advanced_configurations_for_persistently_accelerated_data_models) but this will probably not lower your I/O load.
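As a hedged illustration of that dry run, assuming one of the CIM models such as Network_Traffic with its root dataset All_Traffic (swap in whichever model is slow for you):

| datamodel Network_Traffic All_Traffic acceleration_search

Run it over the same time range a summarization cycle would cover, then open the Job Inspector and look at the command timings and the scanned vs. matched event counts to see where the time actually goes.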
As I said before - if it worked, good for you. And I'm not gonna go through your logs, for several reasons.

Don't take it personally, but this is a public forum where people give their time whenever they want, the way they want. Answers is not a free support/consulting service. If you want something specific done - reach out to your local Splunk Partner or hire Professional Services. To put it bluntly - it costs money.

Secondly, you've already done the migration and apparently you're running the new system as prod. My looking into your logs won't change anything.

Thirdly, I might miss something, especially considering the first point.

You asked, you were advised, you did otherwise. I won't pat you on the back now and say "it's ok". It might be. But it doesn't have to be. And to be frank, I don't have spare time just to ease your mind. I hope it is OK. But I won't check it.
Each of these answers is excellent. Thank you all for your input. For now, I'm going to go with the "was working within the last 30 days" search, as that's what I need. Thanks!
Yes, "should" seems correct  However, I did change these levels and there was no effect. This led me to the documentation pointing at local changes on the indexers. The warnings are presented for ... See more...
Yes, "should" seems correct  However, I did change these levels and there was no effect. This led me to the documentation pointing at local changes on the indexers. The warnings are presented for indexers under the "Health of Distributed Splunk Deployment" pointing at indexers, though I am not sure where warnings are generated, to where they are "collected" before being presented in the search head cluster. I also cannot figure out where in the healtch.conf file these levels could/should be modified (health.conf | Splunk Docs) so I'm guessing it is likely somewhere else.
When collecting Linux logs using a Universal Forwarder we are collecting a lot of unnecessary audit logs from cronjobs that collect server status and other information - in particular, the I/O from a little script. To build grep-based logic on a Heavy Forwarder there would have to be a long list of very particular "grep" strings so as not to lose ALL grep attempts. In a similar manner, commands like 'uname' and 'id' are even harder to filter out.

The logic needed to reliably filter out only the I/O generated by the script would be to find events with comm="script-name", get the pid value from that initial event, and drop all events for the next, say, 10 seconds with a ppid that matches that pid. To make things complicated, there is no control over the logs/files on the endpoints - only over what the Universal Forwarder is able to do, and then the Heavy Forwarder, before the log is indexed.

Is there any way to accomplish this kind of adaptive filtering/stateful cross-event logic in transit and under these conditions? Is this something that may be possible using the new and shiny Splunk Edge Processor once it is generally available?
The Search Head determines if the alert should be displayed or not so that is where the threshold is set.  Go to Settings->Health Report Manager to change the threshold.
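Under the hood those thresholds land in health.conf on the search head. A hedged example of what a stanza can look like - the feature and indicator names below are only illustrative, so check the health.conf spec for the ones behind your particular warning:

[feature:batchreader]
indicator:data_out_rate:yellow = 5
indicator:data_out_rate:red = 10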
No, bucket size should be fine - you just need to play around with 3 DMA parameters:

1. Backfill Range
2. Max Concurrent Searches
3. Max Summarization Search Time

So, the strategy I suggested previously was "build short, build often". Check the attached file. The boxes in yellow are the defaults - with Max Concurrent Searches at 3 and Max Search Time at 3600, the 4th concurrent search will only start once the 1st has completed (after 3600 seconds), which is what is happening in your case. However, we can go with the other approach - in green - where with 4 concurrent searches and Max Search Time at 1200 we can build the summaries faster, which will also pick up recent events faster. Please let me know if you have any questions (and how the configuration changes go!)
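If you prefer config over the UI, those three parameters map to datamodels.conf on the search head. A sketch of the "green" approach, with a placeholder stanza name standing in for your data model:

[My_DataModel]
acceleration = true
acceleration.backfill_time = -7d
acceleration.max_concurrent = 4
acceleration.max_time = 1200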
Hi @genesiusj

If your ITSI instance is separate from the SHC then there is no built-in feature that would replicate between the SHC and ITSI. There are a number of apps on Splunkbase that do various kinds of knowledge object management, but I haven't personally seen any that do this.

How do you currently manage your tags.conf on the SHC? If these are managed on a search head deployer and pushed to the SHC, then you can install the same app on the ITSI SH (via an appropriate deployment mechanism), but you do then have to ensure you push this out to ITSI when making changes to the app which is deployed to the SHC.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @spisiakmi

Wow, Splunk 4! I don't really know where to start with this one, but what I would say is that this sort of thing should probably be done with the assistance of Splunk Professional Services (PS). There are so many questions and caveats here.

The only "off the shelf" supported option would be to go through the official supported upgrade path for each version, as @PrewinThomas has mentioned - but then you also need to balance that with the version of OS that each Splunk version runs on (e.g. when do you move it from Server 2016 to 2022 and continue the upgrade path). There are also factors like how many indexers you have and what the rest of the environment looks like, how data gets into Splunk, and how you would manage the upgrade of any forwarders in the estate.

There are a number of places in the upgrade journey where the structure of buckets changes, e.g. around 4.1/4.2 there are changes, then again at 7.2 and 8.x, which is why the upgrade process is in place. Therefore I wouldn't recommend just copying them from the old to the new infrastructure. Depending on the volume of data, one option might be to freeze out the data from your old infra and thaw it out into your new infra. This way the relevant changes are managed by Splunk. According to the docs it is possible to thaw pre-4.2 data into Splunk 9.x - however there are warnings about different architectures (is your Windows Server 2016 on 64-bit architecture?). Would you be looking to change the Splunk architecture (e.g. utilise clustering) on the new infra?

Ultimately there are a number of ways to do this, but I would strongly suggest speaking to a Splunk Partner and/or Splunk Professional Services to get this planned out. Lastly - in order to upgrade from Splunk 4 using the upgrade path, you will need lots of old versions which are no longer publicly available, so you will need to see if Support can provide these; however I believe this may be unlikely without PS involvement.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
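PS - since thawing came up: the mechanics of it are generally to copy each frozen bucket into the thaweddb directory of the target index and rebuild it, roughly like this (the bucket name and index path are made-up examples):

cp -r /archive/frozen/db_1389687998_1389687748_0 $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/db_1389687998_1389687748_0

Whether rebuild will accept buckets that old is exactly the kind of detail worth confirming with Support/PS first.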
Hello, We have a search head cluster and an ITSI instance. How do we replicate the tags.conf files from various apps on the SHC to ITSI? These are needed for running the various module searches, and other ITSI macros. Did someone create an app to handle this? Thanks and God bless, Genesius