All Posts

With earlier versions the rule was that the indexers had to be the newest component: connected HFs and UFs could be on lower versions, and the same applied for UFs relative to HFs. This changed with 9.x (I cannot remember whether x was 2 or 3). Once your indexers and CM are at that level, HFs and UFs can be newer than the indexers, CM, and other Splunk servers. So in your situation, with 8.x.x servers, all HFs and UFs should be at most the same version as those servers. In any case, those versions are already out of support, so you should upgrade to a supported version as soon as possible. 9.4.4 is probably the best option right now. Don't go to 10.0.0, as it's too new for production use!
Hi @dmoberg

The only two main data sources available as of Splunk 10.0 are standard SPL searches (either via base/chained searches or a saved search) and Splunk Observability.

If you want to query K8s directly from your dashboard, then you will need a custom command which can be run via a standard Splunk SPL search. I'm not aware of an existing app which provides this functionality and couldn't find one on Splunkbase either, so you would need to create a custom app with a custom command that interacts with your K8s cluster. Once you have this, you can include it in your dashboard using standard SPL.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
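To give a rough idea of the wiring for such a custom command (all names below are made up, for illustration only), the app would ship a script in its bin/ directory and register it in commands.conf:

# commands.conf in the custom app (command and script names are hypothetical)
[kubectlinfo]
filename = kubectlinfo.py
chunked = true

With the chunked (v2) protocol, the script itself declares its command type during the initial metadata exchange. The dashboard data source could then be a standard SPL search such as:

| kubectlinfo namespace=prod object=deployments
| table name replicas min_replicas max_replicas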
Hello @livehybrid  I’m sincerely grateful for your response. Your links were very helpful — I was able to locate all the versions I needed for my test environment. Thank you. May I ask—in your experience, have there been situations where a Heavy Forwarder (HF) was running a significantly higher version than the indexers? Specifically, I plan to run my HF on at least version 9.2, up to 9.4. However, I’m not sure how well that will work with my indexers on version 8.2.12. My HF is used only for HEC (HTTP Event Collector).
Yes, but do you know what dedup does? With a search like that you are getting only the latest event (since Splunk returns events in reverse chronological order) for each DeviceName. So that should already be pretty much what you wanted.
If you cannot find those events in any index, have you defined lastChanceIndex in your indexes.conf? If not, then it's time to add it.

lastChanceIndex = <index name>
* An index that receives events that are otherwise not associated with a valid index.
* If you do not specify a valid index with this setting, such events are dropped entirely.
* Routes the following kinds of events to the specified index:
  * events with a non-existent index specified at an input layer, like an invalid "index" setting in inputs.conf
  * events with a non-existent index computed at index-time, like an invalid _MetaData:Index value set from a "FORMAT" setting in transforms.conf
* You must set 'lastChanceIndex' to an existing, enabled index. Splunk software cannot start otherwise.
* If set to "default", then the default index specified by the 'defaultDatabase' setting is used as a last chance index.
* Default: empty string
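For example, a minimal indexes.conf sketch for the indexers (the index name "last_chance" is made up; any existing, enabled index works):

# Global setting: route events with an invalid index here instead of dropping them
lastChanceIndex = last_chance

[last_chance]
homePath   = $SPLUNK_DB/last_chance/db
coldPath   = $SPLUNK_DB/last_chance/colddb
thawedPath = $SPLUNK_DB/last_chance/thaweddb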
For some reason I'm not surprised by this with Forti products.
The event as shown (and as I remember Forti products) doesn't actually conform to either RFC - it's not strictly a syslog message. It's just "something" sent over the network. So unless SC4S can parse out the timestamp in this specific format (which I doubt but I don't have much experience here), it's left for Splunk to do.
In Dashboard Studio for ITSI, we have enabled the Infrastructure Add-on and the ServiceMap, but I am wondering what other types of data sources can be added. For example, I would like to be able to connect to the Kubernetes API to run kubectl commands, etc. This way we would be able to display the current settings for Kubernetes deployments, such as the auto-scaling config. This is how the data sources are currently configured. In this list we would like to be able to add more types of data sources. Any ideas on this?
I think that you cannot do this with props and transforms. The reason is the order in which those different processors run during the ingestion phase. See e.g. https://www.aplura.com/assets/pdf/props_conf_order.pdf Based on that diagram, ANNOTATE_PUNCT runs after Splunk has applied the other props and transforms settings, and events cannot go backwards in the ingestion pipeline.
This is incorrect information. You cannot upgrade directly from 8.1.x to 9.4.x. You must do it as @livehybrid said. This rule is also defined in the Splunk docs. Also, you must start your Splunk service after each step; otherwise it won't perform the needed conversions from the old version to the new one!
Here is an old post with links to scripts which can fetch old versions for you:
https://community.splunk.com/t5/Installation/Need-Splunk-Universal-Forwarder-7-x/m-p/695726/highlight/true#M14117
https://github.com/ryanadler/downloadSplunk
You can ignore the search command. The reason I am using dedup is that there is a large number of devices; one device can have 20-25 events.
Additions to earlier answers. Splunk has the Voice of the Customer program (https://voc.splunk.com/), where some new beta versions are available which you can test and give feedback on to Splunk. Unfortunately, there is currently nothing OIDC-related available there. Then there are the Customer Advisory Programs, where customers can give direct feedback to Splunk's PMs. There are roadmap presentations in every session (held quarterly or something like that). In those you have the possibility to tell the PMs directly what you need and why.
As @PickleRick said, this is a timezone issue. Are all those logs wrongly timed, or only some? I am asking because your SC4S may be in one TZ while you are collecting syslog from several different locations. Also, are your Splunk servers and SC4S in the same TZ?

There are at least two common syslog protocols:
RFC3164 aka BSD syslog
RFC5424

The newer one (RFC5424) contains TZ information in every event, but the old one has only the date and time, without TZ information.

Check which of these your sources use and, if possible, use the RFC5424 version. If you cannot use that, then you must add TZ information to those events on the SC4S or Splunk HEC side. Here are instructions for it: https://splunk.my.site.com/customer/s/article/Splunk-Connect-for-Syslog-Events
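If the TZ has to be fixed on the Splunk side, a minimal props.conf sketch for the first full Splunk instance that parses the data (the host pattern and timezone below are assumptions for illustration):

# props.conf -- interpret timestamps from these hosts as Europe/Berlin
# local time (hypothetical host pattern and TZ)
[host::branch-fw-*]
TZ = Europe/Berlin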
Hi everyone! I'm currently working on a Splunk SOAR on-premises deployment and evaluating its performance using an AWS EC2 t3.xlarge instance (4 vCPU, 16 GB RAM, EBS-backed storage). I'd love your input on the following:

What would be a recommended build configuration (CPU, RAM, disk) to support this kind of usage in playbooks?
Does allowing multiple users to run playbooks simultaneously change the sizing recommendations?
Any experience with tuning playbook runners or autoscaling settings to handle user-driven playbook execution effectively?

Any advice or sizing tips from your deployments would be much appreciated. Thanks in advance!
1. If the events are assigned a wrong timestamp, they _are_ searchable, but the default search range ends at "now", so those events do not fall into this range. Try searching with "latest=+12h" to see if the events are "properly" indexed into the future.

2. It seems like a timezone issue. What timezone is your source in? What timezone does your SC4S run in? What timezone do your Splunk indexers (or the HF, if you're sending to an HF) run in?
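A minimal sketch of such a check (index and sourcetype names are made up); the gap between _indextime and _time also shows how far into the future the events land:

index=netfw sourcetype=fgt_log earliest=-1h latest=+12h
| eval lag_s = _indextime - _time
| stats count min(lag_s) as min_lag_s max(lag_s) as max_lag_s by host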
There are two things here which caught my attention.

1. You're doing some operations on your data (which prevent Splunk from auto-optimizing your search) and then, way down the road, you add

| search DeviceName=something

If you add this condition to the initial search, you will be processing just a small subset of your events, not the whole lot.

2. The use of dedup. Are you absolutely sure you want to use this command? It keeps just the first result for the given field(s).
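For point 1, a sketch of the rearranged beginning of the search, using the names from the question:

index=endpoint_defender source="AdvancedHunting-DeviceInfo" DeviceName="bie-n1690.emea.duerr.int" (DeviceType=Workstation OR DeviceType=Server)
| dedup DeviceName
| ...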
@SN1 You can sort by _time and dedup on the device, or you can also use stats last(). E.g.:

index=endpoint_defender source="AdvancedHunting-DeviceInfo" DeviceType=Workstation OR DeviceType=Server SensorHealthState IN ("active", "Inactive", "Misconfigured", "Impaired communications", "No sensor data") DeviceName="bie-n1690.emea.duerr.int"
| sort 0 - _time
| dedup DeviceName
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| rex field=Timestamp "(?<timeval>\d{4}-\d{2}-\d{2})"
| rex field=DeviceName "^(?<Hostname>[^.]+)"
| rename code as 3-Letter-Code
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUTNEW "Company Code"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT "Company Code" as 4LetCode
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT Region as Region
| eval Region=mvindex(Region, 0), "4LetCode"=mvindex('4LetCode', 0)
| rename "3-Letter-Code" as CC
| table Hostname CC 4LetCode DeviceName timeval Region SensorHealthState

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
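For reference, the stats-based alternative mentioned in the post above could look like this (a sketch using the field names from the question; latest() picks the value from the most recent event by _time, so no sort or dedup is needed):

index=endpoint_defender source="AdvancedHunting-DeviceInfo" DeviceType=Workstation OR DeviceType=Server DeviceName="bie-n1690.emea.duerr.int"
| stats latest(SensorHealthState) as SensorHealthState latest(Timestamp) as Timestamp by DeviceName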
Hello, I have a search and I want only the latest result of it. The problem is that for one DeviceName there are multiple SensorHealthState values: the device was Inactive earlier, but the latest event shows that it is active now. However, this search shows Inactive. How can I get the latest result?

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| dedup DeviceName
| search DeviceType=Workstation OR DeviceType=Server
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| rex field=Timestamp "(?<timeval>\d{4}-\d{2}-\d{2})"
| rex field=DeviceName "^(?<Hostname>[^.]+)"
| rename code as 3-Letter-Code
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUTNEW "Company Code"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT "Company Code" as 4LetCode
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT Region as Region
| eval Region=mvindex('Region',0), "4LetCode"=mvindex('4LetCode',0)
| rename "3-Letter-Code" as CC
| search DeviceName="bie-n1690.emea.duerr.int"
| search SensorHealthState="active" OR SensorHealthState="Inactive" OR SensorHealthState="Misconfigured" OR SensorHealthState="Impaired communications" OR SensorHealthState="No sensor data"
| table Hostname CC 4LetCode DeviceName timeval Region SensorHealthState