Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @rahulkumar , Logstash is a log concentrator, so you are probably receiving logs of different sourcetypes from it (e.g. Linux, firewall, routers, switches, etc.). After extracting the metadata, you have to recover the raw event and assign each kind of log the sourcetype used by the related add-on, e.g. Linux logs must be assigned the sourcetypes linux_secure, linux_audit, and so on. These sourcetypes come from the Splunk Add-on for Unix and Linux, which you can download from Splunkbase. Ciao. Giuseppe
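For illustration, a minimal props.conf / transforms.conf sketch of that kind of sourcetype reassignment on the Splunk side; the incoming sourcetype name (logstash_raw), the regex and the target sourcetype are assumptions for this sketch, not something specified in the post:

# props.conf - applied to the sourcetype the Logstash feed arrives with (assumed name)
[logstash_raw]
TRANSFORMS-set_sourcetype = set_linux_secure

# transforms.conf - rewrite the sourcetype when the event looks like Linux auth data (assumed regex)
[set_linux_secure]
REGEX = sshd\[\d+\]|pam_unix
FORMAT = sourcetype::linux_secure
DEST_KEY = MetaData:Sourcetype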
Thank you for the link(s). It would be great if Splunk had included this important bit of information in their docs...
Hi @greenpebble , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
I am encountering an issue regarding the synchronization of update logs between Sophos and Splunk for a specific host, designated as "EXAMPLE01." According to the Sophos console, the device received updates on the following dates: 19 Nov 2024, 20 Nov 2024, 26 Nov 2024, 2 Dec 2024, 3 Dec 2024, 10 Dec 2024, 17 Dec 2024, and 21 Jan 2025. However, when I search in Splunk within the same timeframe (1 Nov 2024 to 23 Jan 2025), the logs only show updates on 3 Dec 2024, 10 Dec 2024, and 17 Dec 2024. I aim to establish a rule that triggers a notification if there has been no update for 20 days or more. Regrettably, despite the Sophos console indicating recent updates, the discrepancies in Splunk raise concerns about accurate monitoring. I have verified the settings under Indexing > Indexes and Volumes in Splunk, and everything appears to be configured correctly. Could anyone provide insights on how to track and resolve this discrepancy? Thank you for your assistance.
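For the "no update for 20 days or more" rule described above, a minimal SPL sketch of a scheduled alert; the index, sourcetype and field names are assumptions and would need to be adapted to the actual Sophos data:

index=sophos sourcetype="sophos:update" host="EXAMPLE01"
| stats latest(_time) as last_update by host
| where now() - last_update >= 20*86400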
Since there is no such setting in indexes.conf, there are two possible explanations.
1. Less likely - you have this setting set somewhere. Look for it with either find | grep or with splunk btool and remove it.
2. More likely - you hit some frontend issue and coldPath_expanded is a variable that exists only on your browser's side for some strange reason. In that case it's probably support case material.
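A hedged sketch of both approaches, assuming a default /opt/splunk installation path:

# search every .conf file under the Splunk install for the literal string
find /opt/splunk/etc -name "*.conf" -exec grep -l "coldPath_expanded" {} \;

# or let btool print the effective indexes.conf settings and the files they come from
/opt/splunk/bin/splunk btool indexes list --debug | grep -i coldPath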
1. As @VatsalJagani already pointed out - MongoDB is an integral part of the Splunk distribution and Splunk relies on it to work properly. Therefore changing its configuration is not recommended, and you're very likely to cause problems if you change things without a deep understanding of their impact on the whole environment.
2. Baseline checks, vulnerability scans and such are just tools to help you assess the state of the system, not do the job for you. They alone are not sufficient grounds for telling you what is OK and what is not. Running them blindly and following their "recommendations" without understanding the results of the performed tests and their context is not a good practice.
@tungpx- Your answers below:
1. Yes, in your situation, because as you mentioned hot is already full.
2. Your understanding is mostly correct, I think.
3. Solution: the solution for you is to keep some of the hot/warm storage always free. There are two ways to do it (see the sketch after this list):
   a. Use volumes for homePath & coldPath, and allocate less storage for homePath than what is actually available. I have personally not tried this approach, so please read the volumes documentation in indexes.conf.spec first.
   b. Lower maxWarmDBCount (this moves data to coldPath earlier, before hot/warm is full). This is what I use in my environments currently and it works for me!!!
This will improve IOPS performance, which I think should eventually free up the queues.
I hope this helps!!! Kindly upvote!!!
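A minimal indexes.conf sketch of both options; the index name, paths and sizes are assumptions for illustration, so read indexes.conf.spec and adjust them to your environment before using either:

# Option 1: cap a volume below the real partition size (assumed 500 GB partition, 400 GB cap)
[volume:hotwarm]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 400000

[my_index]
homePath = volume:hotwarm/my_index/db
coldPath = /data/cold/my_index/colddb
thawedPath = /data/thawed/my_index/thaweddb
# Option 2: roll warm buckets to cold sooner by lowering the warm bucket count (default is 300)
maxWarmDBCount = 100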
@ND1- I think this doc should help. https://docs.splunk.com/Documentation/SOARonprem/6.3.1/Install/InstallUnprivileged   As a tip, keep in mind that SOAR only supports a limited set of operating systems - https://docs.splunk.com/Documentation/SOARonprem/6.3.1/Install/Requirements (It took me a huge amount of time installing on Ubuntu before I realised it's not supported.)   I hope this helps!!! Please upvote!!!
@bennch68- You can use something like this:

index="sepa_instant" source="D:\\Apps\\Instant_Sepa_01\\log\\ContinuityRequester*" ("connection_down_error raised" OR "-INFOS- {3} QM reconnected")
| sort 0 - _time
| eval event_type=if(match(_raw, "connection_down_error raised"), "disconnect", "connect")
| stats list(event_type) as event_type, list(_time) as all_timestamp, values(eval(if(event_type="disconnect", _time, null()))) as time_of_disconnect
| search event_type="disconnect"
``` the part below may need to be reviewed and fixed; you can remove it, look at the results, and extend the query step by step ```
| eval index_of_disconnect=mvfind(event_type, "^disconnect$")
| eval prev_event_type=mvindex(event_type, index_of_disconnect-1)
| eval next_event_type=mvindex(event_type, index_of_disconnect+1)
| eval timestamp_disconnect=mvindex(all_timestamp, index_of_disconnect)
| eval prev_timestamp=mvindex(all_timestamp, index_of_disconnect-1)
| eval next_timestamp=mvindex(all_timestamp, index_of_disconnect+1)
| eval prev_timediff=prev_timestamp-timestamp_disconnect, next_timediff=timestamp_disconnect-next_timestamp
| search NOT (next_event_type="connect" next_timediff<500)
| where timestamp_disconnect<=relative_time(now(),"-2m@m") AND timestamp_disconnect>=relative_time(now(),"-7m@m")

FYI, the search you are trying to build isn't the simplest of searches, so it may take some time to understand and learn, but it should give you many learnings along the way.

I hope this is helpful. Kindly upvote!!!
@wealot- There is no clear documentation that we can use write: [], so I would suggest testing the following. I'm not sure it is the best solution, but it may work. Create a role called role_for_no_one, do not assign this role to anyone, and do not import it from any other role. Then set the metadata access to read: [*], write: [role_for_no_one], as shown below.   I hope this helps!!!
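A hedged example of what that would look like in a metadata file (the stanza name is an assumption; put it in the app's metadata/local.meta for the object you want to lock down):

[views/my_dashboard]
access = read : [ * ], write : [ role_for_no_one ]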
@anglewwb35- Just FYI, Splunk includes MongoDB within its installation to run the KV store service for lookups. I don't recommend making any specific changes, except perhaps blocking the KV store port from outside the local machine via a local or cloud firewall. (Be careful blocking the port when using a search head cluster, since members replicate the KV store over it.) * Default KV store port - 8191   I hope this helps!!! Kindly upvote!!!
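A minimal iptables sketch of that kind of blocking on Linux, assuming a standalone instance (on a search head cluster you would instead have to allow the other members' addresses):

# drop KV store (MongoDB) traffic to port 8191 unless it comes from the local host
iptables -A INPUT -p tcp --dport 8191 ! -s 127.0.0.1 -j DROP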
@Richy_s- There is currently no option for it, I think, but you can suggest that the Splunk team include it in a future release at https://ideas.splunk.com/   I hope this helps!!! Kindly upvote if it does!!!
Hi Team, I am working with Splunk version 7.3.2, and I would like to add a custom field called jira_ticket to notable events. The goal is to initially populate this field during the event creation process and later update its value via the API as the ticket progresses through different stages in Jira. Here are my key questions:
1. What is the best way to add a custom field like jira_ticket to notable events? Are there specific configurations or updates needed in correlation searches or incident review settings?
2. How can I reliably update this field through the API after it has been created? Are there any specific endpoints or parameters I need to be aware of?
3. Since I am using an older Splunk version (7.3.2), are there any limitations or additional considerations I should keep in mind while implementing this?
If anyone has successfully implemented a similar setup or can point me toward documentation, examples, or best practices, I'd greatly appreciate your input. Thank you in advance!
@ramuzzini- I'm not sure whether it is feasible with Dashboard Studio, but it is possible with a regular Splunk Simple XML (Classic) dashboard. In a Simple XML dashboard you can use the <change> element with $value$ for the username and $label$ to get the full username. Here is a reference that is similar to what you are looking for, but in a Simple XML dashboard - https://community.splunk.com/t5/Dashboards-Visualizations/How-to-set-two-tokens-off-one-dropdown-in-dashboard/m-p/408734 (Please take some time to review this; if you don't have much experience with Simple XML dashboards it may take a while to understand.)   I hope this helps!!! Kindly upvote if it does!!!
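A minimal Simple XML sketch of that pattern; the lookup and field names are assumptions, so swap in whatever search actually feeds your dropdown:

<input type="dropdown" token="user_value" searchWhenChanged="true">
  <label>User</label>
  <fieldForLabel>full_name</fieldForLabel>
  <fieldForValue>username</fieldForValue>
  <search>
    <query>| inputlookup users.csv | fields username full_name</query>
  </search>
  <change>
    <!-- $value$ is the selected username, $label$ is the displayed full name -->
    <set token="selected_username">$value$</set>
    <set token="selected_full_name">$label$</set>
  </change>
</input>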
@pipehitter- It seems you are using Splunk's default export functionality. If that has a bug, you can contact Splunk support or create a case with Splunk.   There are some third-party apps for exports that you can try, but I would still check with Splunk support to figure out the issue.   I hope this helps!!!
@All-done-steak- Please check indexes.conf in the Splunk backend. There seems to be an issue with some config written in indexes.conf.   You can find the indexes.conf files with the find command on Linux: find /opt/splunk -name "indexes.conf"   I hope this helps!!!
Which macOS and Splunk versions do you have? And do you have an Intel or M-series workstation? I have never seen this, but then I never browse those directories with Finder; I just use the CLI.
Of course packaging this setting into an app is the easiest approach, regardless of how you end up pushing that app to the UFs. And since you have no deployment server, you're left with either manually copying the app to each computer and restarting the UF process, or using whatever configuration management tool you already have (if any). Since we're talking Windows, the most probable choice would be SCCM.
.DS_Store is a hidden file that macOS automatically creates in directories to store custom attributes and metadata about a folder. These files are specific to macOS and are generally not needed for application functionality. In the Splunk application directory (/Applications/Splunk/etc/users), these .DS_Store files were likely created by Finder when browsing these directories. Thanks @nlloyd, deleting that file works!
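If Finder keeps recreating them, a hedged one-liner to clear them out of the Splunk tree (the path assumes the default /Applications/Splunk install):

# remove every .DS_Store under the Splunk users directory
find /Applications/Splunk/etc/users -name ".DS_Store" -delete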
Hi, I recently upgraded from Splunk 9.1.5 to 9.3.2. In the 9.1.5 Splunk/bin/scripts directory there were .sh and .py files, but after upgrading to 9.3.2 they were moved to the splunk/quarantine_file path. Why were the files moved? Were only the files in the splunk/bin/scripts path moved?