All Posts

Can you show a sample of the raw syslog events before they are sent to SC4S? Could you also check the whole event on FortiAnalyzer (local logs) and the logs from other FortiGates sent to FA, and see whether there is some other field where those times are set correctly? It has been so long since I last looked at these that I cannot remember all the fields, but if I recall correctly, some information appears several times in one event in slightly different formats. Maybe there is another field that also contains TZ information?
Hi @Sot_Sochetra

Use two separate transforms that match the index metadata field (_MetaData:Index) and send to different TCP output groups:

# props.conf (on the Heavy Forwarder)
[default]
TRANSFORMS-routing = route_ad_to_idx01, route_fs_to_idx02

# transforms.conf (on the Heavy Forwarder)
[route_ad_to_idx01]
SOURCE_KEY = _MetaData:Index
REGEX = ^ad_index$
DEST_KEY = _TCP_ROUTING
FORMAT = index01

[route_fs_to_idx02]
SOURCE_KEY = _MetaData:Index
REGEX = ^fs_index$
DEST_KEY = _TCP_ROUTING
FORMAT = index02

Applying this to your HF with the outputs.conf you've already got should route the fs/ad indexes as required.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
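For completeness, a minimal outputs.conf sketch on the HF that matches the FORMAT values in the transforms above could look like the following. The group names come from the question; the server IPs and the defaultGroup choice are placeholders to adapt, not settings from the original answer (defaultGroup decides where events go when no routing transform matches):

# outputs.conf (on the Heavy Forwarder) - sketch, adapt IPs and defaultGroup to your environment
[tcpout]
defaultGroup = index01

[tcpout:index01]
server = 10.0.0.1:9997

[tcpout:index02]
server = 10.0.0.2:9997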
Sorry, as per usual, late to the party. Yes, I have to agree that checkpoints are painful to amend; we use Ansible to replicate checkpoints between nodes (excuse the terrible pasting of the YAML config). You will also note we do the delete before the post. I'm sure the new version does all this, but this was created before the later releases and had to be amended when the updates occurred.

- hosts: "{{ splunk_node_primary }}"
  gather_facts: no
  become: no
  tasks:
    - name: Enumerate db connect primary kvstore names
      ansible.builtin.uri:
        url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: no
        method: GET
        return_content: yes
      register: db_connect_input_primary_name
      failed_when: db_connect_input_primary_name.status == "404"

    - name: Set fact for names
      ansible.builtin.set_fact:
        db_connect_prim_name: "{{ db_connect_prim_name | default([]) + [ item ] }}"
      with_items: "{{ db_connect_input_primary_name.json | json_query(inputName_key) }}"
      vars:
        inputName_key: "[*].{inputName: inputName}"

    - name: Set fact last
      ansible.builtin.set_fact:
        db_connect_prim_name_unique: "{{ db_connect_prim_name | unique }}"

    - name: Repeat block DB Connect
      ansible.builtin.include_tasks: db_connect_repeat_block.yml
      loop: "{{ db_connect_prim_name_unique | default([]) }}"

Then the repeat block:

---
- name: Enumerate db connect primary inputs
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/inputs/{{ item.inputName }}"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: GET
    return_content: yes
  register: db_connect_primary_list
  failed_when: db_connect_primary_list.status == "404"

- name: Enumerate db connect primary kvstore values
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: GET
    return_content: yes
  register: db_connect_input_primary_value
  failed_when: db_connect_input_primary_value.status == "404"

- name: Set fact
  ansible.builtin.set_fact:
    db_connect_input_chkpt_value: "{{ db_connect_input_chkpt_value | default([]) + [ inp_chkpt_var ] }}"
  with_items: "{{ db_connect_input_primary_value.json | json_query(inputName_value) }}"
  vars:
    inputName_value: "[?inputName=='{{ item.inputName }}'].{inputName: inputName, value: value, appVersion: appVersion, columnType: columnType, timestamp: timestamp}"
  loop_control:
    label: "{{ inp_chkpt_var }}"
    loop_var: inp_chkpt_var

- name: Set fact last
  ansible.builtin.set_fact:
    db_connect_input_chkpt_val: "{{ db_connect_input_chkpt_value | list | last }}"

- name: Set fact for new Chkpt
  ansible.builtin.set_fact:
    init_chkpt_value: "{{ db_connect_primary_list.json | regex_replace('.checkpoint.: None,', \"'checkpoint': %s,\" % db_connect_input_chkpt_val , multiline=True, ignorecase=True) }}"

- name: Set fact for disabled
  ansible.builtin.set_fact:
    init_chkpt_value_disabled: "{{ init_chkpt_value | regex_replace('.disabled.: false,', \"'disabled': true,\", multiline=True, ignorecase=True) }}"

- name: Enumerate db connect secondary kvstore values
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: GET
    return_content: yes
  register: db_connect_input_secondary_value
  failed_when: db_connect_input_secondary_value.status == "404"
  delegate_to: "{{ splunk_node_secondary }}"

- name: Set fact for secondary keys
  ansible.builtin.set_fact:
    db_connect_second_chkpt_key: "{{ db_connect_second_chkpt_key | default([]) + [ item ] }}"
  with_items: "{{ db_connect_input_secondary_value.json | json_query(inputName_key) }}"
  vars:
    inputName_key: "[?inputName=='{{ item.inputName }}'].{_key: _key}"

- name: Show secondary keys
  ansible.builtin.debug:
    msg: "{{ [ inp_second_key ] }}"
  loop: "{{ db_connect_second_chkpt_key | default([]) }}"
  loop_control:
    label: "{{ inp_second_key }}"
    loop_var: inp_second_key
  when: db_connect_second_chkpt_key is defined

- name: Delete db connect secondary kvstore values
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input/{{ inp_second_key._key }}"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: DELETE
    return_content: yes
  delegate_to: "{{ splunk_node_secondary }}"
  loop: "{{ db_connect_second_chkpt_key | default([]) }}"
  loop_control:
    label: "{{ inp_second_key }}"
    loop_var: inp_second_key
  when: db_connect_second_chkpt_key is defined

- name: Enumerate db connect secondary inputs
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/inputs/{{ item.inputName }}"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: GET
    return_content: yes
    status_code:
      - 404
      - 200
      - 500
  delegate_to: "{{ splunk_node_secondary }}"
  register: db_connect_primary_check

- name: Set fact for secondary keys blank
  ansible.builtin.set_fact:
    db_connect_second_chkpt_key: []

- name: Delete db connect secondary inputs
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/inputs/{{ item.inputName }}"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: DELETE
    return_content: yes
    status_code:
      - 204
  delegate_to: "{{ splunk_node_secondary }}"
  when: '"errors" not in db_connect_primary_check.content'

- name: Post db connect secondary inputs
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/inputs"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: POST
    body: "{{ init_chkpt_value_disabled }}"
    return_content: yes
    body_format: json
  register: db_connect_secondary_post
  retries: 3
  delay: 10
  until: "db_connect_secondary_post.status == 200"
  delegate_to: "{{ splunk_node_secondary }}"
Hi @Splunk_2188

The 200 response should be a success, not an error.

Can I confirm you are using the https://splunkbase.splunk.com/app/5882 / https://github.com/splunk-soar-connectors/bmcremedy app? If so, unfortunately it only supports user/password authentication, not token-based authentication. There is a pull request to add OAuth authentication, but not basic token auth (https://github.com/splunk-soar-connectors/bmcremedy/pull/11).

This is a Splunk-supported app, so feel free to raise a request via support to see if token auth is on the roadmap.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi all,

I'm building a distributed Splunk architecture with:
- 1 Search Head
- 2 Indexers (not in a cluster)
- 1 Heavy Forwarder (HF) to route logs from Universal Forwarders (UFs)

I want to route logs to different indexers based on the index name, for example:
- Logs from AD servers should go to indexer01, using index=ad_index
- Logs from File servers should go to indexer02, using index=fs_index

Here is my current config on the HF:

props.conf
[default]
TRANSFORMS-routing = route_to_index02

transforms.conf
[route_to_index02]
REGEX = ^fs_index$|^ad_index$
DEST_KEY = _TCP_ROUTING
FORMAT = index02

outputs.conf
[tcpout]

[tcpout:index01]
server = <IP>:9997

[tcpout:index02]
server = <IP>:9997

And here is the example inputs.conf from the AD server:

[WinEventLog://Security]
disabled = 0
index = ad_index
sourcetype = WinEventLog:Security

[WinEventLog://System]
disabled = 0
index = ad_index
sourcetype = WinEventLog:System

But right now, everything is going to index02, regardless of the index name. So my question is: how can I modify props.conf and transforms.conf on the HF so that:
- ad_index logs go to index01
- fs_index logs go to index02

Thanks in advance for any help
Hi @Ramachandran

The recommended hardware specification for SOAR On-Premise is:

Processor: 1 server-class CPU, 4 to 8 cores
Memory: Minimum of 16GB RAM, 32GB recommended
Storage: Splunk SOAR (On-premises) needs storage for multiple volumes:
- Splunk SOAR (On-premises) home directory, also known as <$PHANTOM_HOME>: 500GiB mounted as either /opt/phantom/ or as <$PHANTOM_HOME>
- Phantom data: 500GiB mounted as either /opt/phantom/data or <$PHANTOM_HOME>/data. The PostgreSQL database will be stored underneath the Phantom data mount at <$PHANTOM_HOME>/data/db
- File share volumes: 500GiB mounted as /opt/phantom/vault or <$PHANTOM_HOME>/vault

Disk space requirements vary depending on the volume of data ingested and the size of your production environment. For more info check out https://help.splunk.com/en/splunk-soar/soar-on-premises/install-and-upgrade-soar-on-premises/6.4.1/system-requirements/system-requirements-for-production-use

Note that 4 vCPU doesn't necessarily equal 1 server-class CPU with 4 cores as per the spec. There are no specific requirements based on the number of playbooks, but using the referenced hardware spec should cover full production use of SOAR and thus should handle your multiple-playbook scenario.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hello,

In our environment, we have Splunk Cloud, on-premise infrastructure including SC4S, and FortiAnalyzer. All systems are set to the same GMT+7 time zone. The issue is specific to the local logs from FortiAnalyzer.

We have the following add-ons installed:
- Fortinet FortiGate Add-on for Splunk (version 1.6.9)
- Fortinet FortiGate App for Splunk (version 1.6.4)

The problem only affects a specific type of log from FortiAnalyzer:
- Logs from other FortiGates: These logs are forwarded to FortiAnalyzer and then to Splunk. They are working correctly, and the log time matches the Splunk event time.
- Local logs from FortiAnalyzer: This includes events like login, logout, and configuration changes on the FortiAnalyzer itself. For these logs, there is a 7-hour time difference between the log timestamp and the Splunk event time.

This time discrepancy causes a significant problem. For example, if we create an alert for a configuration change on FortiAnalyzer, it will be triggered 7 hours late, making real-time monitoring impossible. (As shown in this picture, using the same SPL query, searching by Splunk's event time returns results, while searching by the actual timestamp in the logs returns nothing.)
I think in most cases there are no real issues with different versions, as long as the gap between versions is not too big. And if you are using only HEC to send events from HF to IDX, then it shouldn't be an issue. But if you are also using S2S, then there could be some challenges. At least the MC gives you warnings if HFs are added there and they are newer than the MC itself. If you need help from Splunk Support, then this could be an issue, as that combination is not officially supported.

Anyhow, you should update to at least 9.2.x or 9.3.x as soon as possible. Here is a link to the support times for Splunk Core: https://www.splunk.com/en_us/legal/splunk-software-support-policy.html#core
Hi @igor5212

I've generally not found any issues with HFs running a higher version of Splunk compared with the indexers. There is a good compatibility table at https://help.splunk.com/en/splunk-enterprise/release-notes-and-updates/compatibility-matrix/splunk-products-version-compatibility/compatibility-between-forwarders-and-splunk-enterprise-indexers which lists the officially supported combinations of HF -> IDX versions.

Which versions are your HF and IDX running?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
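If it helps to confirm exactly which forwarder versions are connecting to the indexers, a quick SPL sketch against the internal metrics can be used (this is an assumption-laden example: it relies on the _internal index being searchable and on the version/fwdType/hostname fields that tcpin_connections metrics events normally carry):

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(version) AS forwarder_version latest(fwdType) AS forwarder_type BY hostname
| sort forwarder_version

The splunkd version of the indexers themselves is visible in the Monitoring Console, or via "| rest /services/server/info" if you have the access to run it.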
With earlier versions the rule was that the indexers had to be the newest version; HFs and UFs connected to them could be lower. The same applied for UF vs HF. This changed with 9.x (maybe x was 3 or 2, I cannot remember the exact version). Once your indexers and CM are at that level, HFs and UFs can be newer than the indexers, CM and other Splunk servers.

So in your situation, when you have 8.x.x, all HFs and UFs should be at most the same version as those servers.

Anyhow, those versions are already out of support, so you should upgrade them as soon as possible to a supported version. Probably 9.4.4 is currently the best option. Don't go to 10.0.0 as it's too new for production use!
Hi @dmoberg

The only two main data sources available as of Splunk 10.0 are standard SPL searches (either via base/chained or saved searches) and Splunk Observability.

If you want to query K8s directly from your dashboard then you will need a custom command which can be run via a standard Splunk SPL search. I'm not aware of an existing app which provides this functionality and couldn't find one on Splunkbase either, so you would need to create a custom app with a custom command that interacts with your K8s cluster. Once you have this you can include it in your dashboard using standard SPL.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
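To make that concrete: assuming such a custom command has been built in your own app (the command name kubestatus and its arguments below are purely hypothetical, not an existing Splunkbase app), the dashboard's data source would just be an ordinary SPL search along these lines:

| kubestatus namespace="prod" deployment="my-app"
| table deployment, replicas, hpa_min, hpa_max, cpu_target

Whatever rows the command returns can then be wired into Dashboard Studio charts and tables exactly like the results of any other base search.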
Hello @livehybrid  I’m sincerely grateful for your response. Your links were very helpful — I was able to locate all the versions I needed for my test environment. Thank you. May I ask—in your experience, have there been situations where a Heavy Forwarder (HF) was running a significantly higher version than the indexers? Specifically, I plan to run my HF on at least version 9.2, up to 9.4. However, I’m not sure how well that will work with my indexers on version 8.2.12. My HF is used only for HEC (HTTP Event Collector).
Yes, but do you know what dedup does? With a search like that you are getting only the latest event for each DeviceName (since Splunk returns events in reverse chronological order). So that should already be pretty much what you wanted.
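As a minimal illustration (the index, sourcetype and field names here are assumptions, not taken from the original thread), these two searches should give essentially the same one-row-per-device view, with the stats form usually being the more efficient choice on larger data sets:

index=endpoint sourcetype=device_events
| dedup DeviceName

index=endpoint sourcetype=device_events
| stats latest(_time) AS last_seen latest(Status) AS Status BY DeviceName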
If you cannot find those events in any index, have you defined lastChanceIndex in your indexes.conf? If not, then it's time to add it.

lastChanceIndex = <index name>
* An index that receives events that are otherwise not associated with a valid index.
* If you do not specify a valid index with this setting, such events are dropped entirely.
* Routes the following kinds of events to the specified index:
  * events with a non-existent index specified at an input layer, like an invalid "index" setting in inputs.conf
  * events with a non-existent index computed at index-time, like an invalid _MetaData:Index value set from a "FORMAT" setting in transforms.conf
* You must set 'lastChanceIndex' to an existing, enabled index. Splunk software cannot start otherwise.
* If set to "default", then the default index specified by the 'defaultDatabase' setting is used as a last chance index.
* Default: empty string
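A minimal sketch of what that could look like on the indexers (the index name last_chance is an assumption; any existing, enabled index will do):

# indexes.conf - sketch, adjust index name and paths to your environment
[default]
lastChanceIndex = last_chance

[last_chance]
homePath   = $SPLUNK_DB/last_chance/db
coldPath   = $SPLUNK_DB/last_chance/colddb
thawedPath = $SPLUNK_DB/last_chance/thaweddb

After a restart, anything arriving with an invalid index lands in last_chance instead of being dropped, so you can search it and track down the offending inputs.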
For some reason I'm not surprised to see this with Forti products.
The event as shown (and, as I remember, Forti products in general) doesn't actually conform to either RFC - it's not strictly a syslog message. It's just "something" sent over the network. So unless SC4S can parse out the timestamp in this specific format (which I doubt, but I don't have much experience here), it's left for Splunk to do.
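If it does come down to Splunk-side timestamp parsing, a hedged props.conf sketch for the affected sourcetype might look like this. The sourcetype name and the assumption that the events carry date=/time= key-value pairs without timezone information are guesses based on typical Forti output, so verify against the raw events first:

# props.conf on the first full Splunk instance that parses the data (HF or indexer)
# replace fgt_log with the actual sourcetype of the FortiAnalyzer local logs
[fgt_log]
TIME_PREFIX = date=
TIME_FORMAT = %Y-%m-%d time=%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = Asia/Bangkok

The TZ line forces a GMT+7 interpretation for timestamps that carry no timezone of their own, which is the usual cause of a constant 7-hour offset.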
In Dashboard Studio for ITSI, we have enabled the Infrastructure Add-on and the Service Map, but I am wondering what other types of data sources can be added?

For example, I would like to be able to connect to the Kubernetes API to run kubectl commands, etc. This way we would be able to display the current settings for Kubernetes deployments, such as the auto-scaling config. This is how the data sources are currently configured. In this list we would like to be able to add more types of data sources. Any ideas on this?
I don't think you can do this with props and transforms. The reason is the order in which the different processors run during the ingestion phase; see e.g. https://www.aplura.com/assets/pdf/props_conf_order.pdf. Based on that diagram, ANNOTATE_PUNCT runs after Splunk has applied the other props and transforms settings, and events cannot go backwards in the ingestion pipeline.
This is incorrect information. You cannot upgrade directly from 8.1.x to 9.4.x; you must do it as @livehybrid said. This rule is also defined in the Splunk docs. Also, you must start your Splunk service after each step, otherwise it won't perform the needed conversions from the old version to the new one!
Here is an old post with links to scripts that can fetch old versions for you: https://community.splunk.com/t5/Installation/Need-Splunk-Universal-Forwarder-7-x/m-p/695726/highlight/true#M14117
Here is old post where you can link to scripts which could get old versions to you https://community.splunk.com/t5/Installation/Need-Splunk-Universal-Forwarder-7-x/m-p/695726/highlight/true#M14117 https://github.com/ryanadler/downloadSplunk