All Topics

I have a standard printed statement that shows something like this:

[29/Aug/2024:23:59:48 +0000] "GET /rest/LMNOP
[29/Aug/2024:23:59:48 +0000] "POST /rest/LMNOP
[29/Aug/2024:23:59:48 +0000] "PUT /rest/LMNOP
[29/Aug/2024:23:59:48 +0000] "DELETE /rest/LMNOP

I don't have a defined field called "ActionTaken", in the sense of whether the user was doing a PUT, POST, GET, etc. Is there a simple regex I could add to a query to define a field called "ActionTaken"? I tried this:

rex "\//rest/s*(?<ActionTaken>\d{3})"

but it comes back with nothing.
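A possible fix, as a sketch: the attempted pattern looks for three digits after /rest/, but in the sample events the HTTP method appears before the path, immediately after the opening double quote, so the capture never matches. Assuming the raw events look exactly like the samples above:

| rex field=_raw "\"(?<ActionTaken>GET|POST|PUT|DELETE)\s+/rest/"
| stats count BY ActionTaken

The stats line is just a quick way to verify the extraction; drop it once ActionTaken populates.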
I am trying to use a lookup of "known good" filenames that are within FTP transfer logs to add extra data to files that are found in the logs, but I also need to show when files are expected but not found in the logs. The lookup has a lookup definition defined so that FileName can contain wildcards, and this works for matching the wildcarded filename to existing events with other SPL.

Lookup definition with wildcard on FileName, for csv FTP-Out:

FileName     Type              Direction  weekday
File1.txt    fixedfilename     Out        monday
File73*.txt  variablefilename  Out        thursday
File95*.txt  variablefilename  Out        friday

Example events:

8/30/24 9:30:14.000 AM  FTPFileName=File1.txt Status=Success Size=14kb
8/30/24 9:35:26.000 AM  FTPFileName=File73AABBCC.txt Status=Success Size=15kb
8/30/24 9:40:11.000 AM  FTPFileName=File73XXYYZZ.txt Status=Success Size=23kb
8/30/24 9:45:24.000 AM  FTPFileName=garbage.txt Status=Success Size=1kb

Current search (simplified):

| inputlookup FTP-Out
| join type=left FileName
    [ search index=ftp_logs sourcetype=log:ftp
    | rename FTPFileName as FileName ]

Results I get:

8/30/24 9:30:14.000 AM  File1.txt    fixedfilename     Out  monday  Success  14kb
File73*.txt  variablefilename  Out  thursday
File95*.txt  variablefilename  Out  friday

Desired output:

8/30/24 9:30:14.000 AM  File1.txt         fixedfilename     Out  monday    Success  14kb
8/30/24 9:35:26.000 AM  File73AABBCC.txt  variablefilename  Out  thursday  Success  15kb
8/30/24 9:40:11.000 AM  File73XXYYZZ.txt  variablefilename  Out  thursday  Success  23kb
File95*.txt  variablefilename  Out  friday

Essentially, I want the full filename and results for anything the wildcard in the lookup matches, but I also want to show any time a wildcarded filename in the lookup doesn't match an event in the search window. I've tried various other queries with append/appendcols and transaction, and the closest I've gotten so far is still the left join; however, that doesn't appear to join on wildcarded fields from a lookup. It also doesn't seem that a where clause on a join off a lookup supports like(). I'm hoping that someone else might have an idea on how I can get the matched files as well as the missing files in an output similar to my desired output above. This is within a Splunk Cloud deployment, not Enterprise.
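One possible shape for this, as a sketch rather than a tested answer: drive the search from the events, use the wildcard lookup to tag matches, then append the lookup rows and keep only the patterns no event matched. It assumes the wildcard match type on FileName also applies when the lookup command runs, and that outputting the lookup's own FileName (renamed to Pattern) returns the matching wildcard pattern:

index=ftp_logs sourcetype=log:ftp
| rename FTPFileName AS FileName
| lookup FTP-Out FileName OUTPUT FileName AS Pattern, Type, Direction, weekday
| where isnotnull(Pattern)
| append
    [| inputlookup FTP-Out
     | rename FileName AS Pattern ]
| eventstats count(FileName) AS matched BY Pattern
| where isnotnull(FileName) OR matched==0
| table _time FileName Pattern Type Direction weekday Status Size

The appended lookup rows carry no FileName, so matched stays 0 for any pattern that never hit an event, which produces the "expected but missing" rows.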
On a Dashboard Studio dashboard, I have a dropdown input and a rectangle that can be clicked. When the rectangle is clicked, the value of the dropdown input's token should be changed to a specified value. Is that possible in Dashboard Studio?
Is there an option in Dashboard Studio to set/reset a token that was previously set by a click event to a new value when a specific search in the dashboard has finished running?

Just to clarify: I know that I can access tokens from the search with $search name:result.<field>$ or other tokens like $search name:job.done$. What I need is to set a token when a search is done.

Example:
1. Token "tok_example" has the default value 0.
2. With the click on a button (click event) in the dashboard, the token "tok_example" is set to the value 1.
3. This (the value 1 of the token "tok_example") triggers a search in the dashboard to run.
4. After the search is finished, I want to set the token "tok_example" back to its original value 0 (without any additional interaction by the user with the dashboard).

Step 4 is the part I don't know how to do in Dashboard Studio. Is there a solution for that?
Configuration page failed to load:

Something went wrong! Unable to xml-parse the following data: %s

I have installed the updated Splunk Add-on for Microsoft Cloud Services on a Splunk Enterprise free trial but am getting this error during configuration. Your response will help to resolve this issue.
I am working with ServiceNow logs in Splunk. The ticket data has a field called "sys_created" that gives the ticket creation time in "%Y-%m-%d %H:%M:%S" format. When I run a query for the last 7 days, tickets that were raised more than 7 days ago also populate, because of another field called sys_updated. The sys_updated field stores every update to a ticket, so if an old ticket is updated within the last 7 days, it shows up when I set the time range picker to last 7 days. Is there a way to treat "sys_created" as "_time"?
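A minimal sketch of filtering on creation time instead (the index and sourcetype are placeholders; it assumes sys_created parses with the format you describe):

index=snow sourcetype="snow:incident"
| eval created_epoch=strptime(sys_created, "%Y-%m-%d %H:%M:%S")
| where created_epoch >= relative_time(now(), "-7d@d")
| eval _time=created_epoch

Overwriting _time on the last line also makes the timeline and any timechart reflect creation time rather than the indexed event time.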
Hi, I am testing the Security Essentials App 3.8.0 on Splunk 9.0.8, and I found the same issue while trying to activate the following contents:

- Unknown Process Using The Kerberos Protocol
- Windows Steal or Forge Kerberos Tickets Klist
- ServicePrincipalNames Discovery with SetSPN
- Rubeus Command Line Parameters
- Mimikatz PassTheTicket CommandLine Parameters

In all cases above, I get two errors:

1. "Must have data in data model Endpoint.Processes" is in red, even though I have installed several add-ons suggested as compatible, such as Splunk Add-on for Microsoft Windows 8.9.0 and Palo Alto Networks Add-on for Splunk 8.1.1.
2. Error in 'SearchParser': The search specifies a macro 'summariesonly_config' that cannot be found.

I searched for that missing macro and indeed it does not exist. Should I create it manually? With which value? Do you have any idea how to fix those two errors? Many thanks.
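If the macro really is missing rather than mis-scoped, one stopgap is to define it yourself. The definition below is an assumption modeled on the usual summariesonly arguments passed to tstats, not the app's official value, so treat it as a placeholder until confirmed:

# $SPLUNK_HOME/etc/apps/<some_app>/local/macros.conf
[summariesonly_config]
definition = summariesonly=false allow_old_summaries=true
iseval = 0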
Office 365 connectors in Microsoft Teams are being retired. Has anyone successfully transitioned from Office 365 connectors to Workflows with Splunk Enterprise? Could anyone point me to documentation for doing this, or to a Workflows template that works with Splunk Enterprise?
Hi All,

We created a dashboard to monitor CCTV and it was working fine; however, data suddenly stopped populating. We have not made any change.

My findings:
1. If I select last 30 days, the dashboard works fine.
2. If I select a time range of last 20 days, the dashboard does not work.
3. I started troubleshooting the issue and found the following.

The SPL query below works fine when the time range is last 30 days:

index=test1 sourcetype="stream" NOT upsModel=*1234*
| rename Device AS "UPS Name"
| rename Model AS "UPS Model"
| rename MinRemaining AS "Runtime Remaining"
| replace 3 WITH Utility, 4 WITH Bypass IN "Input Source"
| sort "Runtime Remaining"
| dedup "UPS Name"
| table "UPS Name" "UPS Model" "Runtime Remaining" "Source" "Location"

Note: the same SPL query does not work when the time range is last 20 days.

Troubleshooting: Splunk is receiving data up to the present; however, I have noticed a few things. When I select last 30 days, I can see these fields in the search: UPS Name, UPS Model, Runtime Remaining, Source. When I select last 20 days, those fields are missing, and I am not sure why. Because of the missing fields, the query above returns no data; the final table command shows nothing.

Thanks
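One way to confirm which fields actually exist in each time range is fieldsummary; a quick diagnostic sketch (run it once per time range and compare the field lists):

index=test1 sourcetype="stream" NOT upsModel=*1234*
| fieldsummary
| table field count distinct_count

If Device, Model, and MinRemaining disappear in the shorter range, the problem is upstream field extraction or a change in the incoming events, not the dashboard.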
Hi Team,

I am using a free trial version of Splunk and forwarding logs from a Palo Alto firewall to Splunk. Sometimes I get logs and sometimes I don't; it seems to be a timezone issue. My Palo Alto firewall is in the US/Pacific time zone. How can I check the Splunk timezone, and how can I configure the same timezone on both sides? #splunktimeZone
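Two quick checks, as a sketch. First, an SPL one-liner that shows the Splunk server's local time and UTC offset:

| makeresults
| eval server_time=strftime(now(), "%Y-%m-%d %H:%M:%S %z")

Second, a props.conf override that pins the firewall data to US/Pacific at index time (the sourcetype name pan:traffic is an assumption; use whatever sourcetype your events actually arrive with):

# $SPLUNK_HOME/etc/system/local/props.conf on the indexer or heavy forwarder
[pan:traffic]
TZ = US/Pacific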
I am confused as to how to get this app to work. Can anyone provide me with an instruction sheet telling me what needs to be done? I have downloaded and installed the PCAP Analyzer app but can't seem to get it to analyze. Can anyone help me?
What happens if the amount of data exceeds the daily limit in Splunk Cloud ("Total ingest limit of your ingest-based subscription")? Does data ingestion stop, or does Splunk contact you to discuss adding a license while ingestion continues?
Hello, this is my first experience with Splunk, and I am setting up a lab in VirtualBox. I have:

VM1, acting as server: Ubuntu Desktop 24.04 LTS, IP 192.168.0.33. Installed Splunk Enterprise, added port 997 under "Configure receiving", and added an index named Sysmonlog.

VM2, acting as client: Windows 10, IP 192.168.0.34. Installed Sysmon, installed the Splunk forwarder, set the deployment server to 192.168.0.34 port 8089, and set the indexer to 192.168.0.33 port 9997.

Ping is successful from both VMs. When I try to add the forwarder on my indexer, nothing shows up. How should I troubleshoot this to be able to add the forwarder?
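A few things worth checking, as a sketch (default install paths assumed). On VM1, make sure the indexer is actually listening on 9997, since the receiving port described above reads 997:

/opt/splunk/bin/splunk enable listen 9997

On VM2, from C:\Program Files\SplunkUniversalForwarder\bin, confirm where the forwarder is sending:

splunk list forward-server
splunk add forward-server 192.168.0.33:9997

Then, on VM1, search index=_internal for VM2's hostname; forwarders send their own internal logs, so hits there confirm connectivity before any Sysmon data flows.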
Our vulnerability scan is reporting a critical-severity finding affecting several components of Splunk Enterprise, related to an OpenSSL (1.1.1.x) version that has become EOL/EOS. My research seems to indicate that this version of OpenSSL may not yet be EOS for Splunk because of a purchased extended support contract; however, I have been unable to find documentation to support this. Please help provide this information or suggest how this finding can be addressed.

Path: /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/lib/libcrypto.so
Installed version: 1.1.1k
Security End of Life: September 11, 2023
Time since Security End of Life (Est.): >= 6 months

Thank you.
Situation: search head cluster on 9.2.2, 5 nodes, running Enterprise Security 7.3.2. I'm in the process of adding 5 new nodes to the cluster.

Part of my localization involves creating /opt/splunk/etc/system/local/inputs.conf with the following contents (the reason I do this is to make sure the host field for forwarded internal logs doesn't contain the FQDN-like hostname in server.conf):

[default]
host = <name of this host>

When I get to the step where I run:

splunk add cluster-member -current_member_uri https://current_member_name:8089

it works, but /opt/splunk/etc/system/local/inputs.conf is replicated from current_member_name. And if I run something like:

splunk set default-hostname <name of this host>

it modifies inputs.conf on EVERY node of the cluster. Diving into this, I believe it is happening because of the domain add-on DA-ESS-ThreatIntelligence, which contains a server.conf file in its default directory (why this would be, I've no idea). Contents of /opt/splunk/etc/shcluster/apps/DA-ESS-ThreatIntelligence/default/server.conf on our cluster deployer, which is now delivered to all cluster members:

[shclustering]
conf_replication_include.inputs = true

It seems to me that this stanza is causing the issue. Am I on the right track? And why would DA-ESS-ThreatIntelligence be delivered with this particular config? Thank you.
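If that stanza is indeed the cause, a possible counter-measure (a sketch, on the assumption that etc/system/local settings take precedence over an app's default directory) is to pin the setting back off on each member:

# /opt/splunk/etc/system/local/server.conf on each cluster member
[shclustering]
conf_replication_include.inputs = false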
First of all, hello everyone. I have a Mac M1 computer, on which I installed Splunk Enterprise Security. Then I wanted to install Splunk SOAR, but I could not, because the CentOS/RHEL image for the virtual machine is incompatible with ARM. So I rented a virtual machine from Azure and installed Splunk SOAR there. Splunk Enterprise is installed on my local network. First, I connected Splunk Enterprise to SOAR by following the instructions in this video (https://www.youtube.com/watch?v=36RjwmJ_Ee4&list=PLFF93FRoUwXH_7yitxQiSUhJlZE7Ybmfu&index=2), and the connectivity test was successful. Then I tried to connect SOAR to Splunk Enterprise by following the instructions in this video (https://www.youtube.com/watch?v=phxiwtfFsEA&list=PLFF93FRoUwXH_7yitxQiSUhJlZE7Ybmfu&index=3), but I had trouble connecting SOAR to Splunk because Splunk SOAR and Splunk Enterprise Security are on different networks. In the most common examples I came across, SOAR and Splunk Enterprise Security are on the same network, but mine are on different networks. What should I enter as the host IP when trying to connect from SOAR? What is the solution? Thanks for your help.
Can you create searches using the REST API in Splunk Cloud?
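A sketch of what creating a saved search over REST can look like, assuming REST access to your stack on port 8089 has been enabled (for Splunk Cloud this typically requires the management port to be opened/allowlisted); the credentials, stack name, and search itself are placeholders:

curl -k -u admin:yourpassword \
    "https://<your-stack>.splunkcloud.com:8089/services/saved/searches" \
    -d name="my_rest_search" \
    -d search="index=_internal | head 5"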
Hi All, I have a somewhat unusual requirement (at least to me) that I'm trying to figure out how to accomplish. In the query that I'm running, there's a column which displays a number representing a count of months, i.e. 24, 36, 48, etc. What I'm attempting to do is take that number and create a new field which takes today's date and subtracts that number of months to derive a prior date. For example, if the number of months is 36, then the field would display "08/29/2021"; essentially the same thing that this is doing: https://www.timeanddate.com/date/dateadded.html?m1=8&d1=29&y1=2024&type=sub&ay=&am=36&aw=&ad=&rec= I'm not exactly sure where to begin with this one, so any help getting started would be greatly appreciated. Thank you!
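A minimal sketch using relative_time, assuming the column holding the month count is named months (swap in the real field name):

| eval prior_date=strftime(relative_time(now(), "-" . months . "mon"), "%m/%d/%Y")

relative_time accepts a time-modifier string, so "-36mon" steps back 36 calendar months from now(), and strftime formats the result as MM/DD/YYYY.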
I have a subsearch:

[ search index="june_analytics_logs_prod" (message=* new_state: Diagnostic, old_state: Home*)
| spath serial output=serial_number
| spath message output=message
| spath model_number output=model
| eval keystone_time=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q")
| eval before=keystone_time-10
| eval after=_time+10
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S.%Q")
| table keystone_time, serial_number, message, model, after ]

I would like to take the after and serial fields and use them to construct a main search like:

search index="june_analytics_logs_prod" serial=$serial_number$ message=*glow_v:* earliest=$keystone_time$ latest=$after$

Each event yielded by the subsearch has a time when the event occurred. I want to find events matching the same serial, with messages containing "glow_v", within 10 seconds after each of the subsearch events.
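Because every subsearch row needs its own time window, a plain subsearch can't express this, but map can: it runs the template search once per input row, substituting the $field$ values. A sketch (maxsearches is a guess at the row count; begin and after are kept in epoch form so earliest/latest accept them):

index="june_analytics_logs_prod" (message=* new_state: Diagnostic, old_state: Home*)
| spath serial output=serial_number
| eval begin=_time, after=_time+10
| table serial_number begin after
| map maxsearches=100 search="search index=june_analytics_logs_prod serial=$serial_number$ message=*glow_v:* earliest=$begin$ latest=$after$"

Note that map discards the outer results, so anything needed downstream has to be echoed from inside the mapped search itself.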
The main question is: does configuration file precedence apply to the savedsearches.conf file? The documentation for savedsearches.conf says to read about configuration file precedence:

https://docs.splunk.com/Documentation/Splunk/9.3.0/admin/Savedsearchesconf
https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Wheretofindtheconfigurationfiles

According to the configuration file precedence page, priority in the application/user context follows reverse lexicographic order of app names; that is, the configuration from add-on B overrides the configuration from add-on A.

I have a saved search defined in add-on A (an add-on from Splunkbase). Its SPL is missing an index reference. I created app B with a savedsearches.conf, created an identically named stanza there, and provided a single parameter, "search =", whose value is a new SPL query that contains the particular index. I was hoping that my new add-on named "B" would override the search query in add-on A, but it didn't: Splunk reports that I have a duplicate configuration.

I hope I described this in an understandable way. I must be missing something.
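For what it's worth, the layering that reliably overrides a single saved search is within the same app, local over default, rather than across two apps; a sketch (the stanza name must exactly match the saved search's name in add-on A, and the index name is a placeholder):

# $SPLUNK_HOME/etc/apps/<addon_A>/local/savedsearches.conf
[Name Of The Saved Search From Addon A]
search = index=your_index ... the corrected SPL ...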