All Posts

@tungpx- Your answers below:
1. Yes, in your situation, because as you mentioned the hot storage is already full.
2. Your understanding is mostly correct.
3. Solution: keep some free space for hot buckets at all times. There are two ways to do it:
1. Use volumes for homePath & coldPath, and allocate less storage to the homePath volume than is actually available. I have personally not tried this approach, so please read the volumes documentation in indexes.conf.spec first.
2. Lower maxWarmDBCount (this moves data to coldPath earlier, before the hot/warm partition is full). This is what I currently use in my environments and it works for me!
This should improve IOPS performance, which should eventually free the queues.
I hope this helps! Kindly upvote!
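A rough indexes.conf sketch of both options (the paths, sizes, and index name below are hypothetical; tune them to your environment and test before rolling out):

# indexes.conf - a hedged sketch, not a drop-in config
# Option 1: a volume capped below the real partition size
[volume:hotwarm]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 400000

# Option 2: roll warm buckets to cold earlier (the default maxWarmDBCount is 300)
[my_index]
homePath = volume:hotwarm/my_index/db
coldPath = /mnt/cold/my_index/colddb
thawedPath = /mnt/cold/my_index/thaweddb
maxWarmDBCount = 100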
@ND1- I think this doc should help: https://docs.splunk.com/Documentation/SOARonprem/6.3.1/Install/InstallUnprivileged
As a tip, keep in mind SOAR only supports a limited set of operating systems - https://docs.splunk.com/Documentation/SOARonprem/6.3.1/Install/Requirements (It took me a huge amount of time installing on Ubuntu before I realised it's not supported.)
I hope this helps! Please upvote!
@bennch68- You can use something like this:

index="sepa_instant" source="D:\\Apps\\Instant_Sepa_01\\log\\ContinuityRequester*" ("connection_down_error raised" OR "-INFOS- {3} QM reconnected")
| sort 0 - _time
| eval event_type=if(match(_raw, "connection_down_error raised"), "disconnect", "connect")
| stats list(event_type) as event_type, list(_time) as all_timestamp, values(eval(if(event_type="disconnect", _time, null()))) as time_of_disconnect
| search event_type="disconnect"
``` The part below may need to be reviewed and fixed; you can remove it, check the results, and then update the query step by step ```
| eval index_of_disconnect=mvfind(event_type, "^disconnect$")
| eval prev_disconnect=mvindex(event_type, index_of_disconnect-1)
| eval next_disconnect=mvindex(event_type, index_of_disconnect+1)
| eval timestamp_disconnect=mvindex(all_timestamp, index_of_disconnect)
| eval prev_timestamp=mvindex(all_timestamp, index_of_disconnect-1)
| eval next_timestamp=mvindex(all_timestamp, index_of_disconnect+1)
| eval prev_timediff=prev_timestamp-timestamp_disconnect, next_timediff=timestamp_disconnect-next_timestamp
| search NOT (next_disconnect="connect" next_timediff<500)
| where timestamp_disconnect<=relative_time(now(),"-2m@m") AND timestamp_disconnect>=relative_time(now(),"-7m@m")

FYI, the search you are trying to build isn't the simplest of searches, so it may take some time to understand, but it should give you many learnings along the way.
I hope this is helpful. Kindly upvote!
@wealot- There is no clear documentation that write: [] is supported, so I would suggest testing the following. I'm not sure it is the best solution, but it may work: create a role called role_for_no_one, do not assign this role to anyone, and do not import this role from any other role. Then set the metadata access to read : [ * ], write : [ role_for_no_one ].
I hope this helps!
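A minimal sketch of what that could look like in the app's metadata/local.meta (the object stanza below is hypothetical; point it at whichever knowledge object you want to lock down):

[savedsearches/My Search]
access = read : [ * ], write : [ role_for_no_one ]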
@anglewwb35- Just FYI, Splunk includes MongoDB in its installation to run the KV store service for lookups. I don't recommend making any specific changes, except perhaps blocking the KV store port from outside the local machine via a local firewall or cloud firewall. (Be careful blocking the port when using a SH cluster.)
* Default KV store port - 8191
I hope this helps! Kindly upvote!
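For example, on a standalone Linux box with iptables, a hedged sketch (adapt to your firewall tooling, and skip this on search head cluster members):

# drop KV store traffic that does not arrive via the loopback interface
iptables -A INPUT -p tcp --dport 8191 ! -i lo -j DROP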
@Richy_s- There is currently no option for it, as far as I know. But you can suggest that the Splunk team include it in a future release at https://ideas.splunk.com/
I hope this helps! Kindly upvote if it does!
Hi Team, I am working with Splunk version 7.3.2, and I would like to add a custom field called jira_ticket to notable events. The goal is to initially populate this field during the event creation process and later update its value via the API as the ticket progresses through different stages in Jira. Here are my key questions:
1. What is the best way to add a custom field like jira_ticket to notable events? Are there specific configurations or updates needed in correlation searches or incident review settings?
2. How can I reliably update this field through the API after it has been created? Are there any specific endpoints or parameters I need to be aware of?
3. Since I am using an older Splunk version (7.3.2), are there any limitations or additional considerations I should keep in mind while implementing this?
If anyone has successfully implemented a similar setup or can point me toward documentation, examples, or best practices, I’d greatly appreciate your input. Thank you in advance!
@ramuzzini- I'm not sure if it is feasible with Dashboard Studio, but it is possible with a regular Splunk Simple XML (Classic) dashboard. In a Simple XML dashboard you can use the <change> element, with $value$ for the username and $label$ for the full username. Here is a reference that is similar to what you are looking for, but in Simple XML - https://community.splunk.com/t5/Dashboards-Visualizations/How-to-set-two-tokens-off-one-dropdown-in-dashboard/m-p/408734 (Please take some time to review this and learn; if you don't have much experience with Simple XML dashboards, it may take some time to understand.)
I hope this helps! Kindly upvote if it does!
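A minimal Simple XML sketch of the idea (the lookup, field, and token names below are hypothetical placeholders):

<input type="dropdown" token="username">
  <label>User</label>
  <fieldForLabel>full_name</fieldForLabel>
  <fieldForValue>user</fieldForValue>
  <search>
    <query>| inputlookup users.csv | fields user full_name</query>
  </search>
  <!-- $value$ carries the selected user field; $label$ carries full_name -->
  <change>
    <set token="full_username">$label$</set>
  </change>
</input>

After a selection, $username$ holds the short username and $full_username$ holds the label.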
@pipehitter- It seems you are using Splunk's default export functionality. If that has a bug, you can contact Splunk support or create a case with Splunk.
There are some third-party apps for exports which you can try, but I would check with Splunk support anyway to figure out the issue.
I hope this helps!
@All-done-steak- Please check indexes.conf in the Splunk backend; there seems to be an issue with some config written in indexes.conf.
You can find indexes.conf files with the find command on Linux:
find /opt/splunk -name "indexes.conf"
I hope this helps!
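Once you have found the files, btool (shipped with Splunk) can show the merged settings along with the file each one comes from, and flag invalid configuration:

/opt/splunk/bin/splunk btool indexes list --debug
/opt/splunk/bin/splunk btool check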
Which macOS and Splunk versions do you have? And do you have an Intel or M-series workstation? I have never seen this, but then I never browse those directories with Finder; I just use the CLI.
Of course packaging this setting into an app is the easiest approach regardless of how you're going to end up pushing that app to the UFs. And since you have no DS, you're left with either manually copying the app to each computer and restarting the UF process, or using whatever configuration management tool you already have (if any). Since we're talking Windows, the most probable choice would be SCCM.
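If you do it manually (or wrap it for SCCM), the per-host steps are just a copy and a service restart. A rough PowerShell sketch, assuming the default UF install path and a hypothetical app folder called my_wineventlog_app:

# hypothetical share and app name; adjust paths to your environment
Copy-Item -Recurse '\\fileshare\apps\my_wineventlog_app' 'C:\Program Files\SplunkUniversalForwarder\etc\apps\'
Restart-Service SplunkForwarder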
.DS_Store is a hidden file that macOS automatically creates in directories to store custom attributes and metadata about a folder. These files are specific to macOS and are generally not needed for application functionality. In the Splunk application directory (/Applications/Splunk/etc/users), these .DS_Store files were likely created by Finder when browsing these directories. Thanks @nlloyd, deleting that file works!
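For anyone hitting the same thing, one way to clear them all out under that directory (a sketch; double-check the path before running):

find /Applications/Splunk/etc/users -name '.DS_Store' -delete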
Hi, I recently upgraded from Splunk 9.1.5 to 9.3.2. In 9.1.5 the Splunk/bin/scripts directory contained .sh and .py files, but after upgrading to 9.3.2 they were moved to the splunk/quarantine_file path. Why were the files moved? And were only the files in the splunk/bin/scripts path moved?
A thing of beauty marycordova - thank you!
Thanks for the reply @bowesmana. Yes, I would like to ignore special characters as well, if possible. Your regex will work if the requirement is to ignore the numeric digits within alphanumeric words, but my requirement is to completely ignore words that contain numeric digits.
@ITWhisperer Thanks for sharing the regex. It is working for some of the examples but not for all. I think this is because I have not clearly explained the requirement. My requirement is to capture all the words that contain letters only, and to completely ignore (reject) alphanumeric/numeric words and special characters. Also, I would like to extract the full text, not limited to 12 words. Could you please share the regex, and an explanation if possible? Sharing a couple of examples where the regex is not working:
1) Exception message - CSR-a4cd725c-3d73-426c-b254-5e4f4adc4b26 - Generating exception because of multiple stage failure - abc_ELIGIBILITY
Output with regex: "Exception message - CSR", and for some other records it comes out as "Exception message - CSR-a4cd725c"
Required output: Exception Message CSR Generating exception because of multiple stage failure abc ELIGIBILITY
2) 0013c5fb1737577541466 - Exception message - 0013c5fb1737577541466 - Generating exception because of multiple stage failure - abc_ELIGIBILITY
Output: Exception message
Required output: Exception message Generating exception because of multiple stage failure abc_ELIGIBILITY
3) b187c4411737535464656 - Exception message - b187c4411737535464656 - Exception in abc module. Creating error response - b187c4411737535464656 - Response creation couldn't happen for all the placements. Creating error response. Exception message - b187c4411737535464656 - Exception in abc module. Creating error response - b187c4411737535464656 - Response
Required output: Exception message Exception in abc module. Creating error response Response creation couldn't happen for all the placements. Creating error response.
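One hedged SPL sketch of the filtering approach, rather than a single capture regex: extract every token, then drop any token containing a digit. Note this keeps underscore-joined words like abc_ELIGIBILITY together, which matches the required output of example 2 but differs from example 1, where they are shown split:

| rex max_match=0 "(?<word>\w+)"
| eval letters_only=mvjoin(mvfilter(NOT match(word, "\d")), " ")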
Hi Splunk community! I just want to let you know that I worked with one of our most senior engineers to update information in Splunk docs about NaN -- take a look at the info about NaN in the isnum ... See more...
Hi Splunk community! I just want to let you know that I worked with one of our most senior engineers to update information in Splunk docs about NaN -- take a look at the info about NaN in the isnum and isstr sections in Informational functions in the Splunk platform Search Reference.   I know that playing around with NaN is irresistible, especially for our techiest Splunk experts, but the general advice from the sr engineer is to avoid using NaN in Splunk searches if possible unless you really really know what you're doing.   --Kristina
I totally agree with this, but when you must keep the business up and running and there is this thing we call physics, we have no option other than to wait. Here is an old story about it: https://web.mit.edu/jemorris/humor/500-miles
Oooooh, I gotcha. Thank you for the info! If I don't have a deployment server for the UFs, how would I go about updating their configs to drop the event codes I don't want coming into the index?