All Posts

Hello,

In our environment we have Splunk Cloud, on-premise infrastructure including SC4S, and FortiAnalyzer. All systems are set to the same GMT+7 time zone. The issue is specific to the local logs from FortiAnalyzer.

We have the following add-ons installed:
- Fortinet FortiGate Add-on for Splunk (version 1.6.9)
- Fortinet FortiGate App for Splunk (version 1.6.4)

The problem only affects a specific type of log from FortiAnalyzer:
- Logs from other FortiGates: these are forwarded to FortiAnalyzer and then to Splunk. They are working correctly, and the log time matches the Splunk event time.
- Local logs from FortiAnalyzer: this includes events like login, logout, and configuration changes on the FortiAnalyzer itself. For these logs there is a 7-hour difference between the log timestamp and the Splunk event time.

This time discrepancy causes a significant problem. For example, if we create an alert for a configuration change on FortiAnalyzer, it will be triggered 7 hours late, making real-time monitoring impossible. (As shown in this picture, using the same SPL query, searching by Splunk's event time returns results, while searching by the actual timestamp in the logs returns nothing.)
I think in most cases there are no real issues with different versions, as long as the version gap is not too big. And if you are using only HEC to send events from HF->IDX then it shouldn't be an issue. But if you are also using S2S then there could be some challenges. At least the MC gives you a warning if HFs added there are newer than the MC itself. If you ever need help from Splunk Support then this could be an issue, as that combination is not officially supported. Anyhow, you should update to at least 9.2.x or 9.3.x asap. Here is a link to the support times for Splunk core: https://www.splunk.com/en_us/legal/splunk-software-support-policy.html#core
Hi @igor5212

I've generally not found any issues with HFs running a higher version of Splunk compared with the indexers. There is a good compatibility table at https://help.splunk.com/en/splunk-enterprise/release-notes-and-updates/compatibility-matrix/splunk-products-version-compatibility/compatibility-between-forwarders-and-splunk-enterprise-indexers which lists the officially supported combinations of HF->IDX versions. Which versions are your HF and IDX running?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
With earlier versions the rule was that the indexers must be on the newest version; the HFs and UFs connected to them could be lower, and the same applied for UF vs HF. This changed with 9.x (maybe x was 2 or 3, I cannot remember the exact version). Once your indexers and CM are at that level, HFs and UFs can be newer than the indexers, the CM and the other Splunk servers. So in your situation, with 8.x.x, all HFs and UFs should be at most the same version as those servers. Anyhow, those versions are already out of support, so you should upgrade them as soon as possible to a supported version. Probably 9.4.4 is currently the best option. Don't go to 10.0.0 as it's too new for production use!
Hi @dmoberg

The only 2 main data sources available as of Splunk 10.0 are standard SPL searches (either via base/chained or saved search) and Splunk Observability. If you want to query K8s directly from your dashboard then you will need a custom command which can be run via a standard Splunk SPL search. I'm not aware of an existing app which provides this functionality and couldn't find one on Splunkbase either, so you would need to create a custom app with a custom command that interacts with your K8s cluster. Once you have this you can include it in your dashboard using standard SPL.
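As a sketch of what such a custom command might do internally (the function and field names here are hypothetical, not from any existing app), here is a minimal Python helper that flattens a Kubernetes Deployment object, as returned by the K8s API, into a flat key/value event that a Splunk generating command could emit:

```python
import json

def deployment_to_event(deploy: dict) -> dict:
    """Flatten the interesting parts of a K8s Deployment API object
    into a single flat dict suitable for emitting as a Splunk event.
    Field names are illustrative, not from any existing add-on."""
    meta = deploy.get("metadata", {})
    spec = deploy.get("spec", {})
    status = deploy.get("status", {})
    return {
        "name": meta.get("name"),
        "namespace": meta.get("namespace"),
        "replicas_desired": spec.get("replicas"),
        "replicas_ready": status.get("readyReplicas", 0),
        "strategy": spec.get("strategy", {}).get("type"),
    }

# Example: a trimmed Deployment object as the K8s API would return it
sample = {
    "metadata": {"name": "web", "namespace": "prod"},
    "spec": {"replicas": 3, "strategy": {"type": "RollingUpdate"}},
    "status": {"readyReplicas": 2},
}
print(json.dumps(deployment_to_event(sample)))
```

In a real custom command this dict would be yielded from the command's generate/stream method after fetching the objects from the K8s API with a service-account token.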
Hello @livehybrid  I’m sincerely grateful for your response. Your links were very helpful — I was able to locate all the versions I needed for my test environment. Thank you. May I ask—in your expe... See more...
Hello @livehybrid  I’m sincerely grateful for your response. Your links were very helpful — I was able to locate all the versions I needed for my test environment. Thank you. May I ask—in your experience, have there been situations where a Heavy Forwarder (HF) was running a significantly higher version than the indexers? Specifically, I plan to run my HF on at least version 9.2, up to 9.4. However, I’m not sure how well that will work with my indexers on version 8.2.12. My HF is used only for HEC (HTTP Event Collector).
Yes, but do you know what dedup does? With a search like that you are getting only the latest event for each DeviceName (since Splunk returns events in reverse chronological order). So that should already be pretty much what you wanted.
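To illustrate dedup's keep-first semantics, here is a small Python sketch (not how Splunk implements it, and the sample events are made up): given results in reverse chronological order, keeping only the first occurrence per DeviceName is the same as keeping the latest event per device.

```python
def dedup(events, key):
    """Mimic SPL's `dedup <key>`: keep only the first event seen
    for each value of `key`, preserving input order."""
    seen = set()
    out = []
    for ev in events:
        if ev[key] not in seen:
            seen.add(ev[key])
            out.append(ev)
    return out

# Events as Splunk returns them: newest first
events = [
    {"DeviceName": "fw1", "_time": "10:05"},
    {"DeviceName": "fw2", "_time": "10:04"},
    {"DeviceName": "fw1", "_time": "10:00"},  # older fw1 event, dropped
]
print(dedup(events, "DeviceName"))
```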
If you cannot find those events in any index, have you defined lastChanceIndex in your indexes.conf? If not then it's time to add it.

lastChanceIndex = <index name>
* An index that receives events that are otherwise not associated with a valid index.
* If you do not specify a valid index with this setting, such events are dropped entirely.
* Routes the following kinds of events to the specified index:
  * events with a non-existent index specified at an input layer, like an invalid "index" setting in inputs.conf
  * events with a non-existent index computed at index-time, like an invalid _MetaData:Index value set from a "FORMAT" setting in transforms.conf
* You must set 'lastChanceIndex' to an existing, enabled index. Splunk software cannot start otherwise.
* If set to "default", then the default index specified by the 'defaultDatabase' setting is used as a last chance index.
* Default: empty string
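As a minimal sketch of what that could look like in indexes.conf (the index name "last_chance" is just an example, and the index must exist and be enabled per the spec above):

```
# indexes.conf -- "last_chance" is an example name of my own choosing
[last_chance]
homePath   = $SPLUNK_DB/last_chance/db
coldPath   = $SPLUNK_DB/last_chance/colddb
thawedPath = $SPLUNK_DB/last_chance/thaweddb

[default]
lastChanceIndex = last_chance
```

After that, events arriving with an invalid index land in last_chance instead of being dropped, so you can search them there.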
For some reason I'm not surprised by this with Forti products.
The event as shown (and as I remember Forti products) doesn't actually conform to either RFC - it's not strictly a syslog message. It's just "something" sent over the network. So unless SC4S can parse out the timestamp in this specific format (which I doubt but I don't have much experience here), it's left for Splunk to do.
In Dashboard Studio for ITSI we have enabled the Infrastructure Add-On and the ServiceMap, but I am wondering what other types of data sources can be added? For example, I would like to be able to connect to the Kubernetes API to run kubectl commands, etc. This way we would be able to display the current settings for Kubernetes deploys, such as the Auto Scaling config. This is how the data sources are currently configured. In this list we would like to be able to add more types of data sources. Any ideas on this?
I think that you cannot do this with props and transforms. The reason is the order in which those different processors run in the ingestion phase. See e.g. https://www.aplura.com/assets/pdf/props_conf_order.pdf Based on that diagram, ANNOTATE_PUNCT runs after Splunk has applied the other props and transforms settings, and events cannot go backwards in the ingestion pipeline.
This is incorrect information. You cannot upgrade directly from 8.1.x to 9.4.x. You must do it as @livehybrid told. This rule is also defined in the Splunk docs. Also, you must start your splunk service after each step, otherwise it won't do the needed conversions from the old version to the new one!
Here is an old post with links to scripts which can fetch old versions for you: https://community.splunk.com/t5/Installation/Need-Splunk-Universal-Forwarder-7-x/m-p/695726/highlight/true#M14117 https://github.com/ryanadler/downloadSplunk
You can ignore the search= command. The reason I am using dedup is that there is a large number of devices, and 1 device has like 20-25 events.
Additions to the earlier answers. Splunk has the Voice of the Customer program at https://voc.splunk.com/ where there are some new beta versions which you can test and give feedback on to Splunk. Unfortunately there is currently no OIDC-related stuff available. Then there are the Customer Advisory Programs, where customers can give direct feedback to Splunk's PMs. There are some roadmap presentations in every session (held quarterly or something like that). In those you have the possibility to tell the PMs directly what you need and why.
As @PickleRick said, this is a timezone issue. Are all those logs wrongly timed, or only some? I am asking because your SC4S could be in one TZ while you are collecting syslogs from several different locations. Also, are your Splunk servers and SC4S in the same TZ?

There are at least two common syslog protocols:
- RFC3164 aka BSD syslog
- RFC5424

The newer one (RFC5424) contains TZ information in every event, but the old one has only the date and time, not the TZ information. Check which format those sources use and, if possible, use the RFC5424 version. If you cannot use that, then you must add the TZ information to those events on the SC4S or Splunk HEC side. Here are instructions for it: https://splunk.my.site.com/customer/s/article/Splunk-Connect-for-Syslog-Events
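To see why the missing TZ matters, here is a Python sketch (the timestamps are made up for illustration): an RFC5424 timestamp carries its offset, while an RFC3164-style timestamp is ambiguous, so a receiver that assumes the wrong zone places a GMT+7 event exactly 7 hours off.

```python
from datetime import datetime, timezone, timedelta

# RFC5424-style timestamp: the offset is part of the event
rfc5424 = "2024-05-01T10:00:00+07:00"
aware = datetime.fromisoformat(rfc5424)

# RFC3164-style timestamp carries no year and no timezone
# (e.g. "May  1 10:00:00"). If the receiver wrongly assumes UTC
# instead of GMT+7:
naive = datetime(2024, 5, 1, 10, 0, 0)
assumed_utc = naive.replace(tzinfo=timezone.utc)

# The misinterpretation shifts the event by exactly the TZ offset
shift = assumed_utc - aware
print(shift)  # 7:00:00
```

This is the same 7-hour discrepancy described in the original question, which is why forcing the correct TZ on the SC4S or HEC side fixes it.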
Hi everyone! I'm currently working on a Splunk SOAR on-premises deployment and evaluating its performance using an AWS EC2 t3.xlarge instance (4 vCPU, 16 GB RAM, EBS-backed storage). I'd love your input on the following:
- What would be a recommended build configuration (CPU, RAM, disk) to support this kind of usage in playbooks?
- Does allowing multiple users to run playbooks simultaneously change the sizing recommendations?
- Any experience with tuning playbook runners or autoscaling settings to handle user-driven playbook execution effectively?
Any advice or sizing tips from your deployments would be much appreciated. Thanks in advance!
1. If the events are wrongly assigned a timestamp, they _are_ searchable, but the default search range ends at "now", so those events do not fall into this range. Try searching with "latest=+12h" to see if the events are "properly" indexed into the future.
2. It seems like a timezone issue. What timezone is your source in? What timezone does your SC4S run in? What timezone do your Splunk indexers (or HF, if you're sending to an HF) run in?
There are two things here which caught my attention.
1. You're doing some operations on your data (which prevent Splunk from auto-optimizing your search) and then way down the road you add | search DeviceName=something. If you add this condition to the initial search you will be processing just a small subset of your events, not the whole lot.
2. The use of dedup. Are you absolutely sure you want to use this command? It keeps just the first result with the given field(s).