All Posts

First of all, hello everyone. I have a Mac (M1), and I installed Splunk Enterprise Security on it. Then I wanted to install Splunk SOAR, but I could not, because the CentOS/RHEL image installed on the virtual machine is not ARM-compatible. So I rented a virtual machine from Azure and installed Splunk SOAR there. Splunk Enterprise is installed on my local network. First, I connected Splunk Enterprise to SOAR by following the instructions in this video (https://www.youtube.com/watch?v=36RjwmJ_Ee4&list=PLFF93FRoUwXH_7yitxQiSUhJlZE7Ybmfu&index=2), and Test Connectivity was successful. Then I tried to connect SOAR to Splunk Enterprise by following the instructions in this video (https://www.youtube.com/watch?v=phxiwtfFsEA&list=PLFF93FRoUwXH_7yitxQiSUhJlZE7Ybmfu&index=3), but I had trouble connecting SOAR to Splunk because Splunk SOAR and Splunk Enterprise Security are on different networks. In the most common examples I came across, SOAR and Splunk Enterprise Security are on the same network, but mine are on different networks. What should I enter as the host IP when trying to connect from SOAR? What is the solution? Thanks for your help.
Can you create searches using the REST API in Splunk Cloud?
| eval previous_time=relative_time(now(),"-".months."mon")

You would have to be careful around leap years and cases where the number of months is not a multiple of 12. If you know months is always going to be a multiple of 12, you could do this instead:

| eval previous_time=relative_time(now(),"-".floor(months/12)."y")
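Outside of SPL, the same month arithmetic can be illustrated with a small Python sketch. Note the day-clamping rule for shorter target months (e.g. Mar 31 minus 1 month) is my assumption about the desired behavior, not something taken from the original question:

```python
from datetime import date

def subtract_months(d: date, months: int) -> date:
    """Return the date `months` months before `d`, clamping the day
    when the target month is shorter (e.g. Mar 31 - 1 month -> Feb 28/29)."""
    # Work in "total months since year 0" to make the carry arithmetic trivial.
    total = d.year * 12 + (d.month - 1) - months
    year, month = divmod(total, 12)
    month += 1
    # Find the last day of the target month to clamp against.
    if month == 12:
        next_month_start = date(year + 1, 1, 1)
    else:
        next_month_start = date(year, month + 1, 1)
    last_day = (next_month_start - date(year, month, 1)).days
    return date(year, month, min(d.day, last_day))

print(subtract_months(date(2024, 8, 29), 36))  # → 2021-08-29
```

The first example reproduces the "08/29/2024 minus 36 months = 08/29/2021" result from the question.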
Please clarify what you expect. Your example shows policy_3 and policy_4 changing in the last 24 hours by the removal of (X), not the addition, and they don't appear prior to today, so what is it that you are trying to compare? Similarly, policy_1 and policy_2 do not appear today, although they do appear to have changed by the removal of (X) within the 48 hours prior to today.
@manuelostertag I'm having the same issue. Any luck with this?
Hi All, I have a somewhat unusual requirement (at least to me) that I'm trying to figure out how to accomplish. In the query that I'm running, there's a column which displays a number representing a number of months, e.g. 24, 36, 48, etc. What I'm attempting to do is take that number and create a new field which takes today's date and subtracts that number of months to derive a prior date. For example, if the number of months is 36, then the field would display "08/29/2021"; essentially the same thing that this is doing: https://www.timeanddate.com/date/dateadded.html?m1=8&d1=29&y1=2024&type=sub&ay=&am=36&aw=&ad=&rec= I'm not exactly sure where to begin with this one, so any help getting started would be greatly appreciated. Thank you!
Despite the documentation, I've never seen reverse-lexicographic order applied to .conf files. If you need to override the settings in an app, the best way is to specify the new setting in the same app's /local directory. If that's not possible, use an app that sorts before the app you want to override.

As always, btool is your friend. It will tell you what settings will apply before you restart Splunk:

splunk btool --debug savedsearches list <<search name>>
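For illustration, the /local override might look like this (the app directory, stanza name, and search string here are hypothetical placeholders, not taken from the original question):

```ini
# $SPLUNK_HOME/etc/apps/addon_A/local/savedsearches.conf
# The stanza name must exactly match the one in the app's
# default/savedsearches.conf; /local settings win over /default
# within the same app.
[My Saved Search]
search = index=my_index sourcetype=my_sourcetype | stats count by host
```

Only the settings you list in /local are overridden; everything else still comes from the stanza in /default.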
Rather than try to run the report on the last day of the month, how about running it as soon as the month ends, on the first day of the next month?

1 0 1 * *

I used minute 1 to avoid getting skipped during the overly-popular minute 0.
This data is not being onboarded properly.  That may be your fault or someone else's, but you need to work with the owner of the HF to install a better set of props.conf settings so the data is onboarded correctly. Focus on the Great Eight settings, with particular attention to LINE_BREAKER, TIME_PREFIX, and TIME_FORMAT. If the HF owner pushes back, remind him/her that Splunk suffers when data is not onboarded well.  Additionally, the company may suffer if data cannot be searched because the timestamps are wrong.
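As a rough illustration only (the sourcetype name and timestamp format below are assumptions; they must be adapted to the actual data), a props.conf stanza covering those settings might look like:

```ini
# props.conf on the HF (or wherever parsing happens for this data)
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
```

The point is to break events explicitly and tell Splunk exactly where and in what format the timestamp lives, rather than relying on automatic detection.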
You're asking for trouble. While you might try to use a subsearch to return a set of criteria for the main search, it is a very unreliable way to do it, and you're bound to get unexplained wrong search results, especially when searching over larger datasets, due to subsearch limitations. Additionally, there are several problems with your searches:

1. Both are highly inefficient due to the wildcard at the beginning of a search term.
2. You can't do arithmetic on a string-rendered timestamp.
3. That is not the right format for earliest/latest (to be safe, it's best to just use epoch timestamps for those parameters if calculating them from a subsearch).
4. Your first search contains several separate search terms instead of - as I presume - a single string.

After this overly long introduction - it's probably best done completely differently, for example with streamstats marking subsequent events.
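The streamstats approach could be sketched roughly like this, reusing the index and field names from the question. Treat it as an untested outline of the idea (carry the time of the most recent state-transition event forward per serial, then keep glow_v events within 10 seconds of it), not a drop-in answer:

```
index="june_analytics_logs_prod"
    ("new_state: Diagnostic, old_state: Home" OR "glow_v:")
| spath serial output=serial_number
| spath message output=message
| sort 0 serial_number _time
| eval trigger=if(match(message, "new_state: Diagnostic, old_state: Home"), _time, null())
| streamstats last(trigger) as trigger_time by serial_number
| where match(message, "glow_v:") AND _time >= trigger_time AND _time - trigger_time <= 10
```

This avoids the subsearch entirely, so none of the subsearch result-count or runtime limits apply.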
I have a subsearch:

[search index="june_analytics_logs_prod" (message=* new_state: Diagnostic, old_state: Home*)
| spath serial output=serial_number
| spath message output=message
| spath model_number output=model
| eval keystone_time=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q")
| eval before=keystone_time-10
| eval after=_time+10
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S.%Q")
| table keystone_time, serial_number, message, model, after]

I would like to take the after and serial fields and use them to construct a main search like:

search index="june_analytics_logs_prod" serial=$serial_number$ message=*glow_v:* earliest=$keystone_time$ latest=$after$

Each event yielded by the subsearch has a time when the event occurred. I want to find events matching the same serial, with messages containing "glow_v", within 10 seconds after each of the subsearch events.
I will give this a try. Thank you @PickleRick
Putting like events together on the same line is the purpose of step #7 in my original reply.  Doing that, however, requires a field with values common to both the old and new policies.
@richgalloway Thank you for the response! So let's say we go the path of creating two scheduled reports:

1. A report on the 16th day of the month.
2. How would I set up the second query to search from the 17th to the 30th or 31st, depending on the month? Would it look like this?
The main question is: does configuration file precedence apply to the savedsearches.conf file? The documentation for savedsearches.conf states that I should read about configuration file precedence.

https://docs.splunk.com/Documentation/Splunk/9.3.0/admin/Savedsearchesconf
https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Wheretofindtheconfigurationfiles

According to the config file precedence page, the priority of savedsearches.conf in the app/user context is determined by reverse lexicographic order. That is, the configuration from add-on B overrides the configuration from add-on A. I have a saved search defined in add-on A (an add-on from Splunkbase). There is a missing index call in its SPL. I created app B with a savedsearches.conf, created an identically named stanza there, and provided a single parameter, "search =". In that parameter I put a new SPL query that contains the particular index call. I was hoping that my new add-on named "B" would override the search query in add-on A, but it didn't. Splunk reports that I have a duplicate configuration. I hope I described this in an understandable way. I must be missing something.
cron lets you schedule a report on the 16th day of a month, but not the 16th and last days.  You would need two reports for that. Even if cron did what you seek, Splunk sends reports immediately.  There is no way to sit on the results before sending them other than to write the results to a summary index or CSV.  Then you would need two reports - one to search the first half of the month and write the summary; the second searches the rest of the month and incorporates the summary results into the final report.
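A rough sketch of the two-report, summary-index approach. The index names, the stats fields, and the exact time modifiers here are placeholders to show the shape of the solution, not a tested configuration:

```
Report 1 - cron "0 1 16 * *", covers the 1st through the 15th:

    index=my_index earliest=@mon latest=@d
    | stats count by host
    | collect index=my_monthly_summary

Report 2 - cron "1 0 1 * *", covers the 16th through month end,
then folds in the summarized first half:

    index=my_index earliest=-1mon@mon+15d latest=@mon
    | stats count by host
    | append [search index=my_monthly_summary earliest=-1mon@mon latest=now]
    | stats sum(count) as count by host
```

The second report is the one that actually sends the full-month results, which is why it runs just after midnight on the 1st.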
A UF does not do parsing, except for indexed extractions or when you set force_local_processing=true. So unless you turn your UF into a kind of poor-man's HF, your parsing and time extraction settings will not work on a UF. If you have access to a HEC endpoint, though, you could consider using another method (like a third-party solution such as Filebeat, or even your own Python script) to pre-parse those events a bit and send them via HTTP.
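A minimal sketch of the "own Python script" route, assuming a reachable HEC endpoint. The URL, token, and sourcetype below are hypothetical placeholders; how you extract the timestamp from each raw line depends entirely on your data:

```python
import json
import urllib.request

# Hypothetical HEC endpoint and token -- replace with your own.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_event(raw_line: str, sourcetype: str, epoch_time: float) -> dict:
    """Wrap one pre-parsed log line in an HEC event envelope.

    Setting "time" here is the whole point: the timestamp is extracted
    by this script, sidestepping the UF's lack of parse-time processing."""
    return {
        "time": epoch_time,
        "sourcetype": sourcetype,
        "event": raw_line.strip(),
    }

def send_to_hec(events: list[dict]) -> int:
    """POST a batch of events to HEC (newline-delimited JSON); returns HTTP status."""
    body = "\n".join(json.dumps(e) for e in events).encode("utf-8")
    req = urllib.request.Request(
        HEC_URL,
        data=body,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Batching several events per POST, as send_to_hec does, is generally kinder to the collector than one request per line.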
It's not clear how the health field is calculated. One way is what @ITWhisperer showed, but it won't match your mockup results - you have health=bad all across the board.
1. Other than the fact that you're holding the events in Splunk, the question as such is completely unrelated to Splunk. It's a question about ObserveIT.

2. There is no general one-size-fits-all answer. Different organizations have different sensitivity to those things.
This worked, thank you so much @dural_yyz24