All Posts

We are looking to upgrade our Splunk instance to the latest version. I would like to download the installation manuals for Splunk Enterprise v10 as well as other documents. I noticed that the new documentation portal no longer offers downloadable PDFs of the material. Has anyone else encountered this? Is this no longer an option with the new portal? Appreciate any insight.
Hi @AleCanzo  Have you set up visualizations.conf etc.? There is a good tutorial at https://docs.splunk.com/Documentation/Splunk/9.4.2/AdvancedDev/CustomVizTutorial which might be worth going through if you haven't already. If you've done all of that already, then I would hit the _bump endpoint: https://yourSplunk:8000/en-US/_bump
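For reference, a rough sketch of the layout the tutorial walks you through; the app and viz names (my_viz_app, my_3d_viz) are placeholders and the field names are from memory, so double-check them against the tutorial for your version:

# $SPLUNK_HOME/etc/apps/my_viz_app/default/visualizations.conf
# Minimal stanza so Splunk lists the custom viz (placeholder names)
[my_3d_viz]
label = My 3D Visualization
description = Custom three.js visualization (example only)

# The built JavaScript bundle is expected at:
# $SPLUNK_HOME/etc/apps/my_viz_app/appserver/static/visualizations/my_3d_viz/visualization.js

After changing .conf files or static assets, hitting the _bump endpoint (or restarting Splunk) forces Splunk Web to reload them.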
Hi guys, I'm trying to put a 3D visualization that I've made with three.js into my Splunk dashboard, but it doesn't work. I've put my main.js in .../appserver/static and the HTML in an HTML panel in my dashboard. Any docs/recommendations? Thanks, Alecanzo.
Hey @SN1, What @PickleRick said is correct. You'll be receiving only the latest result since you're using dedup. However, since dedup is an expensive command, you can use a transforming command like stats to fetch the latest results instead. Your query should look something like this:

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| search (DeviceType=Workstation OR DeviceType=Server) AND DeviceName="bie-n1690.emea.duerr.int"
| search SensorHealthState = "active" OR SensorHealthState = "Inactive" OR SensorHealthState = "Misconfigured" OR SensorHealthState = "Impaired communications" OR SensorHealthState = "No sensor data"
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| rex field=Timestamp "(?<timeval>\d{4}-\d{2}-\d{2})"
| rex field=DeviceName "^(?<Hostname>[^.]+)"
| rename code as 3-Letter-Code
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUTNEW "Company Code"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT "Company Code" as 4LetCode
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT Region as Region
| eval Region=mvindex('Region',0), "4LetCode"=mvindex('4LetCode',0)
| rename "3-Letter-Code" as CC
| stats latest(SensorHealthState) as latest_SensorHealthState by DeviceName Region ...

The latest function always fetches the most recent value of the field passed as an argument, based on time. You can add the fields you want to group the results by in the by clause. Hope this helps you optimize your query.

Thanks, Tejas.

--- If the above solution helps, an upvote is appreciated..!!
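As a minimal illustration of the same pattern, stripped of the lookups and rex extractions (reusing the field names from the query above):

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| stats latest(SensorHealthState) as SensorHealthState latest(_time) as last_seen by DeviceName
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")

This returns one row per DeviceName with its most recent health state; because stats is a transforming command that can be distributed to the indexers, it usually scales better than dedup on large result sets.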
Ok. If possible, you should take the Data Administration class or something similar. It covers the basics of how to ingest data into Splunk: https://education.splunk.com/Saba/Web_spf/NA10P2PRD105/app/me/learningeventdetail;spf-url=common%2Fledetail%2Fcours000000000003499%3FfromAutoSuggest%3Dtrue https://www.splunk.com/en_us/pdfs/training/splunk-enterprise-data-administration-course-description.pdf Depending on your needs, you could also parse that data on the indexers instead of using separate HFs for it. It's hard to say which option is better for you, as we don't know your needs well enough. You should also think again about whether you need clustered indexers instead of separate ones. That will make your environment more robust than what you have now. Of course it needs more disk space etc., but I'm quite sure it's worth the additional cost. You will recoup it the first time you have an issue or crash on an individual indexer... I think you should take some courses, learn the same material yourself online, or bring in a local Splunk company/consultant/architect to help you define and set up your environment. I'm quite sure you will save more money that way than starting from scratch without enough knowledge and having to set it up again later.
Thank you for the reply. I will test it out tomorrow, as I am away from work right now, and I will let you know how it goes.
Thanks for the advice! The main reason I'm using a Heavy Forwarder is that, from what I've read, it can parse data before sending it to the indexer. For example, I'm planning to collect logs from some network devices (like firewalls or routers), and I thought sending them through the HF would help with parsing or enriching the data first. Also, I'm still pretty new to Splunk, so sorry if I'm misunderstanding anything or asking something obvious. Best regards, Chetra
You can edit your message and insert the ansible part in either a preformatted paragraph or a code box. Then it will not get butchered (most importantly - the indents will be preserved).
OK. Let me get this straight. You have a single stream of events you're receiving on your SC4S from the FortiAnalyzer, and some of those events come directly from the FortiAnalyzer while others are forwarded by the FortiAnalyzer from FortiGates? Is that correct? I'm not aware that - without additional bending over backwards - SC4S can treat different events within a single event stream differently. Anyway, how is the timestamp rendered for each of those kinds of events (in the original raw events)?
Why do you want to use an HF between the UFs and the indexers? The best practice is to send events directly from the UFs to the indexers. If you can do it that way, just add another outputs.conf to all the UFs and then reference the target groups in inputs.conf, as sketched below. That's a much easier and more robust approach than putting an HF between the UFs and the indexers. If you must use an HF (e.g. because of a security policy), then you should have at least two HFs doing the routing between the UFs and the indexers.
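A minimal sketch of that direct UF-to-indexer setup; the group name, hostnames, and the Windows input are placeholders, so adjust them to your environment:

# outputs.conf (on each UF) - define the indexers as a tcpout group
[tcpout]
defaultGroup = prod_indexers

[tcpout:prod_indexers]
server = idx01.example.com:9997, idx02.example.com:9997

# inputs.conf (on each UF) - optionally pin an individual input to a group
[WinEventLog://Security]
index = ad_index
_TCP_ROUTING = prod_indexers

With defaultGroup set, inputs without an explicit _TCP_ROUTING still reach the indexers, so the per-input override is only needed if you want different inputs sent to different groups.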
Can you show a sample of the raw syslog events before they are sent to SC4S? Can you also look at the whole event for the FortiAnalyzer local logs and for the logs other FortiGates send to the FA, and check whether there is some other field where those times are set correctly? It has been a long time since I last looked at these, so I can't remember all the fields. If I recall correctly, some information appears several times in one event in slightly different formats. Maybe there is another field that also contains the TZ information?
Hi @Sot_Sochetra  Use two separate transforms that match the MetaData:Index field and send to different TCP groups:

# props.conf (on the Heavy Forwarder)
[default]
TRANSFORMS-routing = route_ad_to_idx01, route_fs_to_idx02

# transforms.conf (on the Heavy Forwarder)
[route_ad_to_idx01]
SOURCE_KEY = MetaData:Index
REGEX = ^ad_index$
DEST_KEY = _TCP_ROUTING
FORMAT = index01

[route_fs_to_idx02]
SOURCE_KEY = MetaData:Index
REGEX = ^fs_index$
DEST_KEY = _TCP_ROUTING
FORMAT = index02

Applying this to your HF with the outputs.conf you've already got should route the fs/ad indexes as required.
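One thing worth double-checking (a hedged note, since your full outputs.conf isn't shown here): the FORMAT values must match the [tcpout:<group>] stanza names exactly, along these lines:

# outputs.conf (on the Heavy Forwarder) - group names must match FORMAT above
[tcpout]
defaultGroup = index01

[tcpout:index01]
server = <indexer01-IP>:9997

[tcpout:index02]
server = <indexer02-IP>:9997

defaultGroup decides where events land when neither transform matches; leave it out only if you are sure every event carries one of the two indexes.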
Sorry, as per usual, late to the party. Yes, I have to agree that checkpoints are painful to amend; we use Ansible to replicate checkpoints between nodes. Excuse the terrible pasting of the YAML config. You will also note we do the delete before the post. I'm sure the new version does all this, but this was created before the later releases and had to be amended when the updates occurred.

- hosts: "{{ splunk_node_primary }}"
  gather_facts: no
  become: no
  tasks:
    - name: Enumerate db connect primary kvstore names
      ansible.builtin.uri:
        url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: no
        method: GET
        return_content: yes
      register: db_connect_input_primary_name
      failed_when: db_connect_input_primary_name.status == "404"

    - name: Set fact for names
      ansible.builtin.set_fact:
        db_connect_prim_name: "{{ db_connect_prim_name | default([]) + [ item ] }}"
      with_items: "{{ db_connect_input_primary_name.json | json_query(inputName_key) }}"
      vars:
        inputName_key: "[*].{inputName: inputName}"

    - name: Set fact last
      ansible.builtin.set_fact:
        db_connect_prim_name_unique: "{{ db_connect_prim_name | unique }}"

    - name: Repeat block DB Connect
      ansible.builtin.include_tasks: db_connect_repeat_block.yml
      loop: "{{ db_connect_prim_name_unique | default([]) }}"

Then the repeat block (db_connect_repeat_block.yml):

---
- name: Enumerate db connect primary inputs
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/inputs/{{ item.inputName }}"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: GET
    return_content: yes
  register: db_connect_primary_list
  failed_when: db_connect_primary_list.status == "404"

- name: Enumerate db connect primary kvstore values
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: GET
    return_content: yes
  register: db_connect_input_primary_value
  failed_when: db_connect_input_primary_value.status == "404"

- name: Set fact
  ansible.builtin.set_fact:
    db_connect_input_chkpt_value: "{{ db_connect_input_chkpt_value | default([]) + [ inp_chkpt_var ] }}"
  with_items: "{{ db_connect_input_primary_value.json | json_query(inputName_value) }}"
  vars:
    inputName_value: "[?inputName=='{{ item.inputName }}'].{inputName: inputName, value: value, appVersion: appVersion, columnType: columnType, timestamp: timestamp}"
  loop_control:
    label: "{{ inp_chkpt_var }}"
    loop_var: inp_chkpt_var

- name: Set fact last
  ansible.builtin.set_fact:
    db_connect_input_chkpt_val: "{{ db_connect_input_chkpt_value | list | last }}"

- name: Set fact for new Chkpt
  ansible.builtin.set_fact:
    init_chkpt_value: "{{ db_connect_primary_list.json | regex_replace('.checkpoint.: None,', \"'checkpoint': %s,\" % db_connect_input_chkpt_val , multiline=True, ignorecase=True) }}"

- name: Set fact for disabled
  ansible.builtin.set_fact:
    init_chkpt_value_disabled: "{{ init_chkpt_value | regex_replace('.disabled.: false,', \"'disabled': true,\", multiline=True, ignorecase=True) }}"

- name: Enumerate db connect secondary kvstore values
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: GET
    return_content: yes
  register: db_connect_input_secondary_value
  failed_when: db_connect_input_secondary_value.status == "404"
  delegate_to: "{{ splunk_node_secondary }}"

- name: Set fact for secondary keys
  ansible.builtin.set_fact:
    db_connect_second_chkpt_key: "{{ db_connect_second_chkpt_key | default([]) + [ item ] }}"
  with_items: "{{ db_connect_input_secondary_value.json | json_query(inputName_key) }}"
  vars:
    inputName_key: "[?inputName=='{{ item.inputName }}'].{_key: _key}"

- name: Show secondary keys
  ansible.builtin.debug:
    msg: "{{ [ inp_second_key ] }}"
  loop: "{{ db_connect_second_chkpt_key | default([]) }}"
  loop_control:
    label: "{{ inp_second_key }}"
    loop_var: inp_second_key
  when: db_connect_second_chkpt_key is defined

- name: Delete db connect secondary kvstore values
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input/{{ inp_second_key._key }}"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: DELETE
    return_content: yes
  delegate_to: "{{ splunk_node_secondary }}"
  loop: "{{ db_connect_second_chkpt_key | default([]) }}"
  loop_control:
    label: "{{ inp_second_key }}"
    loop_var: inp_second_key
  when: db_connect_second_chkpt_key is defined

- name: Enumerate db connect secondary inputs
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/inputs/{{ item.inputName }}"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: GET
    return_content: yes
    status_code:
      - 404
      - 200
      - 500
  delegate_to: "{{ splunk_node_secondary }}"
  register: db_connect_primary_check

- name: Set fact for secondary keys blank
  ansible.builtin.set_fact:
    db_connect_second_chkpt_key: []

- name: Delete db connect secondary inputs
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/inputs/{{ item.inputName }}"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: DELETE
    return_content: yes
    status_code:
      - 204
  delegate_to: "{{ splunk_node_secondary }}"
  when: '"errors" not in db_connect_primary_check.content'

- name: Post db connect secondary inputs
  ansible.builtin.uri:
    url: "https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/inputs"
    user: "{{ ansible_user }}"
    password: "{{ ansible_password }}"
    validate_certs: no
    method: POST
    body: "{{ init_chkpt_value_disabled }}"
    return_content: yes
    body_format: json
  register: db_connect_secondary_post
  retries: 3
  delay: 10
  until: "db_connect_secondary_post.status == 200"
  delegate_to: "{{ splunk_node_secondary }}"
Hi @Splunk_2188  The 200 response should be a success, not an error. Can I confirm you are using the https://splunkbase.splunk.com/app/5882 / https://github.com/splunk-soar-connectors/bmcremedy app? If so, unfortunately it only supports user/password authentication, not token-based authentication. There is a pull request to add OAuth authentication, but not basic token auth (https://github.com/splunk-soar-connectors/bmcremedy/pull/11). This is a Splunk-supported app, so feel free to raise a request via support to see if token auth is on the roadmap.
Hi all, I'm building a distributed Splunk architecture with:
1 Search Head
2 Indexers (not in a cluster)
1 Heavy Forwarder (HF) to route logs from Universal Forwarders (UFs)

I want to route logs to different indexers based on the index name, for example:
Logs from AD servers should go to indexer01, using index=ad_index
Logs from File servers should go to indexer02, using index=fs_index

Here is my current config on the HF:

props.conf
[default]
TRANSFORMS-routing = route_to_index02

transforms.conf
[route_to_index02]
REGEX = ^fs_index$|^ad_index$
DEST_KEY = _TCP_ROUTING
FORMAT = index02

outputs.conf
[tcpout]

[tcpout:index01]
server = <IP>:9997

[tcpout:index02]
server = <IP>:9997

And here is the example inputs.conf from the AD server:

[WinEventLog://Security]
disabled = 0
index = ad_index
sourcetype = WinEventLog:Security

[WinEventLog://System]
disabled = 0
index = ad_index
sourcetype = WinEventLog:System

But right now, everything is going to index02, regardless of the index name. So my question is: how can I modify props.conf and transforms.conf on the HF so that:
ad_index logs go to index01
fs_index logs go to index02

Thanks in advance for any help
Hi @Ramachandran  The recommended hardware specification for SOAR (On-premises) is:

Processor: 1 server-class CPU, 4 to 8 cores
Memory: minimum of 16GB RAM, 32GB recommended
Storage: Splunk SOAR (On-premises) needs storage for multiple volumes:
Splunk SOAR (On-premises) home directory, also known as <$PHANTOM_HOME>: 500GiB mounted as either /opt/phantom/ or as <$PHANTOM_HOME>
Phantom data: 500GiB mounted as either /opt/phantom/data or <$PHANTOM_HOME>/data. The PostgreSQL database will be stored underneath the Phantom data mount at <$PHANTOM_HOME>/data/db
File share volumes: 500GiB mounted as /opt/phantom/vault or <$PHANTOM_HOME>/vault

Disk space requirements vary depending on the volume of data ingested and the size of your production environment. For more info check out https://help.splunk.com/en/splunk-soar/soar-on-premises/install-and-upgrade-soar-on-premises/6.4.1/system-requirements/system-requirements-for-production-use

Note that 4 vCPUs doesn't necessarily equal one server-class CPU with 4 cores as per the spec. There are no specific requirements based on the number of playbooks, but using the referenced hardware spec should cover full production use of SOAR and thus should handle your multiple-playbook scenario.
Hello, In our environment, we have Splunk Cloud, on-premise infrastructure including SC4S, and FortiAnalyzer. All systems are set to the same GMT+7 time zone. The issue is specific to the local logs from FortiAnalyzer.

We have the following add-ons installed:
Fortinet FortiGate Add-on for Splunk (version 1.6.9)
Fortinet FortiGate App for Splunk (version 1.6.4)

The problem only affects a specific type of log from FortiAnalyzer:
Logs from other FortiGates: these logs are forwarded to FortiAnalyzer and then to Splunk. They are working correctly, and the log time matches the Splunk event time.
Local logs from FortiAnalyzer: this includes events like login, logout, and configuration changes on the FortiAnalyzer itself. For these logs, there is a 7-hour time difference between the log timestamp and the Splunk event time.

This time discrepancy causes a significant problem. For example, if we create an alert for a configuration change on FortiAnalyzer, it will be triggered 7 hours late, making real-time monitoring impossible. (As shown in this picture, using the same SPL query, searching by Splunk's event time returns results, while searching by the actual timestamp in the logs returns nothing.)
I think in most cases there are no real issues with different versions, as long as the version gap is not too big. And if you are only using HEC to send events from HF to IDX, then it shouldn't be an issue. But if you are also using S2S, then there could be some challenges. And at the very least, the MC gives you warnings if HFs added there are newer than the MC itself. If you need help from Splunk Support, this could also be an issue, as that combination is not officially supported. Anyhow, you should update to at least 9.2.x or 9.3.x ASAP. Here is a link to the support timelines for Splunk core: https://www.splunk.com/en_us/legal/splunk-software-support-policy.html#core
Hi @igor5212  I've generally not found any issues with HFs running a higher version of Splunk compared with the indexers. There is a good compatibility table at https://help.splunk.com/en/splunk-enterprise/release-notes-and-updates/compatibility-matrix/splunk-products-version-compatibility/compatibility-between-forwarders-and-splunk-enterprise-indexers which lists the officially supported combinations of HF-to-indexer versions. Which versions are your HF and indexers running?