All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @nmohammed and @goelshruti119, please see the following reply for instructions on how to troubleshoot: https://community.splunk.com/t5/Installation/Install-issue-on-Server-2016/m-p/540173/highlight/true#... Cheers, - Jo.
Our vulnerability scan is reporting a critical-severity finding affecting several components of Splunk Enterprise, related to an OpenSSL (1.1.1.x) version that has reached EOL/EOS. My research seems to indicate that this version of OpenSSL may not yet be EOS for Splunk due to the purchase of an extended support contract; however, I have been unable to find documentation to support this. Please help provide this information or suggest how this finding can be addressed.

Path: /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/lib/libcrypto.so
Installed version: 1.1.1k
Security End of Life: September 11, 2023
Time since Security End of Life (Est.): >= 6 months

Thank you.
Situation: search cluster on 9.2.2, 5 nodes, running Enterprise Security version 7.3.2. I'm in the process of adding 5 new nodes to the cluster. Part of my localization involves creating /opt/splunk/etc/system/local/inputs.conf with the following contents (the reason I do this is to make sure the host field for forwarded internal logs doesn't contain the FQDN-like hostname from server.conf):

[default]
host = <name of this host>

When I get to the step where I run:

splunk add cluster-member -current_member_uri https://current_member_name:8089

it works, but /opt/splunk/etc/system/local/inputs.conf is replicated from current_member_name. And if I run something like:

splunk set default-hostname <name of this host>

...it modifies inputs.conf on EVERY node of the cluster. Digging into this, I believe it is happening because of the domain add-on DA-ESS-ThreatIntelligence, which contains a server.conf file in its default directory (why this would be, I've no idea). Contents of /opt/splunk/etc/shcluster/apps/DA-ESS-ThreatIntelligence/default/server.conf on our cluster deployer, which is now delivered to all cluster members:

[shclustering]
conf_replication_include.inputs = true

It seems to me that it's this stanza that is causing the issue. Am I on the right track? And why would DA-ESS-ThreatIntelligence be delivered with this particular config? Thank you.
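One way to test that theory is a deployer-side override so the add-on's shipped default is not applied as-is. This is only a sketch, assuming the usual default/local precedence applies inside the app; verify the impact on ES replication before pushing it to the cluster:

```
# /opt/splunk/etc/shcluster/apps/DA-ESS-ThreatIntelligence/local/server.conf
# (hypothetical override on the deployer; confirm ES does not rely on
# inputs.conf replication before using it)
[shclustering]
conf_replication_include.inputs = false
```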
Actually, I did what you said. I asked this question to the community to make sure I was doing it right, in case I was missing something. SOAR is installed on a CentOS 8.5 operating system. I couldn't install OpenVPN on this OS, so I rented another virtual machine and installed OpenVPN on it. The VPN machine and SOAR were again on different networks, so I peered them over Azure. The CentOS 8.5 machine and the OpenVPN machine were on the same network. When I connect to the VPN from my computer, I can ping the CentOS private IP address from my computer and get a response; there is no problem here. But Splunk SOAR still refuses to connect.
It's not just "because they are in different networks" but because your internal network is organized the way it is. You might try to set up a VPN to your Azure environment to allow for connectivity, or try to do some DNATs in your home network, but since you're asking rather basic network-related questions, you'd better not do that without fully understanding the risks.
@richgalloway Yeah, the only reason I'm splitting it into two sections is that when I ran the search for one month, the exported Excel sheet was missing data for some reason, but when I split it in half in the search query the data populates. I guess the search query might have been too messy, and adding a full month on top of it might have caused it to use too much resource or something. Thank you.
Not searching in Fast mode. I am going to assume that I did not install it in all the required places; I inherited this from another employee. I have it deployed from the DS to my endpoints, and the local .conf files are configured there. I also have it installed via Manage Apps on the Cloud search head.
First of all, hello everyone. I have a Mac M1 computer, on which I installed Splunk Enterprise Security. Then I wanted to install Splunk SOAR, but I could not, due to CentOS/RHEL ARM incompatibility on the virtual machine. So I rented a virtual machine from Azure and installed Splunk SOAR there. Splunk Enterprise is installed on my local network. First, I connected Splunk Enterprise to SOAR by following the instructions in this video (https://www.youtube.com/watch?v=36RjwmJ_Ee4&list=PLFF93FRoUwXH_7yitxQiSUhJlZE7Ybmfu&index=2), and Test Connectivity gave successful results. Then I tried to connect SOAR to Splunk Enterprise by following the instructions in this video (https://www.youtube.com/watch?v=phxiwtfFsEA&list=PLFF93FRoUwXH_7yitxQiSUhJlZE7Ybmfu&index=3), but I had trouble connecting SOAR to Splunk because Splunk SOAR and Splunk Enterprise Security are on different networks. In the most common example I came across, SOAR and Splunk Enterprise Security are on the same network, but mine are on different networks. What should I enter as the host IP here when trying to connect SOAR? What is the solution? Thanks for your help.
Can you create searches using the REST API in Splunk Cloud?
| eval previous_time=relative_time(now(),"-".months."mon")

You would have to be careful around leap years and if months is not a multiple of 12. If you know months is always going to be a multiple of 12, you could do this instead:

| eval previous_time=relative_time(now(),"-".floor(months/12)."y")
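Putting it together, a minimal sketch that also renders the derived epoch time in the MM/DD/YYYY form the question asked for (the months field name is taken from the question; not tested against your data):

```
| eval previous_time=relative_time(now(), "-".months."mon")
| eval previous_date=strftime(previous_time, "%m/%d/%Y")
```

relative_time returns an epoch timestamp, so the second eval is only needed for display.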
Please clarify what you expect - your example shows policy_3 and policy_4 changing in the last 24 hours by the removal of (X), not the addition, and they don't appear prior to today, so what is it that you are trying to compare? Similarly, policy_1 and policy_2 do not appear today, although they do appear to have changed by the removal of (X) within the 48 hours prior to today.
@manuelostertag I'm having the same issue. Any luck with this?
Hi All, I have a somewhat unusual requirement (at least to me) that I'm trying to figure out how to accomplish. In the query that I'm running, there's a column which displays a number representing a number of months, e.g. 24, 36, 48, etc. What I'm attempting to do is take that number and create a new field which takes today's date and subtracts that number of months to derive a prior date. For example, if the number of months is 36, then the field would display "08/29/2021" - essentially the same thing this does: https://www.timeanddate.com/date/dateadded.html?m1=8&d1=29&y1=2024&type=sub&ay=&am=36&aw=&ad=&rec= I'm not exactly sure where to begin with this one, so any help getting started would be greatly appreciated. Thank you!
Despite the documentation, I've never seen reverse-lexicographic order applied to .conf files. If you need to override the settings in an app, the best way is to specify the new setting in the same app's /local directory. If that's not possible, use an app that sorts before the app you want to override.

As always, btool is your friend. It will tell you what settings will apply before you restart Splunk.

splunk btool --debug savedsearches list <<search name>>
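For example, to change one setting of a saved search shipped in an app's default directory, the override lives in that app's /local directory. The app name, stanza name, and setting here are made up purely for illustration:

```
# /opt/splunk/etc/apps/my_app/local/savedsearches.conf
# (hypothetical app and search; overrides the same stanza in ../default/)
[My Saved Search]
cron_schedule = 0 6 * * *
```

Only the settings you specify in /local are overridden; everything else still comes from the app's default file.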
Rather than try to run the report on the last day of the month, how about running it as soon as the month ends - the first day of the next month?

1 0 1 * *

I used minute 1 to avoid getting skipped during the overly-popular minute 0.
This data is not being onboarded properly.  That may be your fault or someone else's, but you need to work with the owner of the HF to install a better set of props.conf settings so the data is onboarded correctly. Focus on the Great Eight settings, with particular attention to LINE_BREAKER, TIME_PREFIX, and TIME_FORMAT. If the HF owner pushes back, remind him/her that Splunk suffers when data is not onboarded well.  Additionally, the company may suffer if data cannot be searched because the timestamps are wrong.
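As a sketch of the kind of props.conf the HF owner might apply - the sourcetype name and the timestamp format here are assumptions and must be adjusted to the actual data:

```
# props.conf on the heavy forwarder (sourcetype and formats are hypothetical)
[my_custom:sourcetype]
# each newline starts a new event; no multi-line merging
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# timestamp sits at the start of the event, e.g. 2024-08-29 12:34:56.789
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
TRUNCATE = 10000
```

Explicit LINE_BREAKER, TIME_PREFIX, and TIME_FORMAT stop Splunk from guessing, which is usually where bad timestamps come from.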
You're asking for trouble. While you might try to use a subsearch to return a set of criteria for the main search, it is a very unreliable way to do it, and you're bound to get unexplained wrong search results, especially when searching over larger datasets, due to subsearch limitations.

Additionally, there are several problems with your searches:
- Both are highly inefficient due to wildcards at the beginning of search terms.
- You can't do arithmetic on a string-rendered timestamp.
- That is not the right format for earliest/latest (to be safe, it's best to just use epoch timestamps for those parameters if calculating them from a subsearch).
- Your first search contains several separate search terms instead of - as I presume - a single string.

After this overly long introduction: it's probably best done completely differently - for example with streamstats marking subsequent events.
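A rough sketch of the streamstats idea, using the index and field names from the question. The message-matching expressions are assumptions, and this ignores limits (e.g. how many events sort can handle) that would need attention on large datasets:

```
index="june_analytics_logs_prod"
| spath serial output=serial_number
| sort 0 + _time
| eval transition_time=if(match(message, "new_state: Diagnostic, old_state: Home"), _time, null())
| streamstats last(transition_time) as transition_time by serial_number
| where match(message, "glow_v:") AND _time - transition_time <= 10
```

The streamstats carries each serial's most recent transition time forward, so the final where keeps only "glow_v" events within 10 seconds of that transition - all in one pass, with no subsearch.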
I have a subsearch:

[search index="june_analytics_logs_prod" (message=* new_state: Diagnostic, old_state: Home*)
| spath serial output=serial_number
| spath message output=message
| spath model_number output=model
| eval keystone_time=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q")
| eval before=keystone_time-10
| eval after=_time+10
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S.%Q")
| table keystone_time, serial_number, message, model, after]

I would like to take the after and serial fields and use them to construct a main search like:

search index="june_analytics_logs_prod" serial=$serial_number$ message=*glow_v:* earliest=$keystone_time$ latest=$after$

Each event yielded by the subsearch gives a time when the event occurred. I want to find events matching the same serial, with messages containing "glow_v", within 10 seconds after each of the subsearch events.
I will give this a try. Thank you, @PickleRick
Putting like events together on the same line is the purpose of step #7 in my original reply.  Doing that, however, requires a field with values common to both the old and new policies.