All Posts

Hi, I have created a playbook and am trying to run it from an event, but the playbook does not populate when I click on Run Playbook. What am I doing wrong?
Hi @anlePRH

Are you already producing the table you shared in your original post, or is that what you are wanting to get to? You should be able to use the following after your REX:

| stats list(SourceIP) as IPs, count as Count by Subnet

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
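For reference, a minimal end-to-end sketch of what this thread converges on, assuming the raw field is named SourceIP and using a placeholder base search:

index=your_index sourcetype=your_sourcetype
| rex field=SourceIP "(?<Subnet>\d+\.\d+\.\d+\.*)"
| stats list(SourceIP) as IPs, count as Count by Subnet

Note that list() keeps every matching value (including duplicates) in event order, while values() returns a deduplicated, sorted set, so pick whichever fits the table you want.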
I have created a playbook and am trying to run it from an event I have configured, but when I click on Run Playbook my playbook does not show in the list. What am I missing?
Hi @vishalduttauk

Have you seen the Microsoft O365 Email Add-on for Splunk? Its description includes "The Microsoft® O365® Email Add-on for Splunk® ingests O365 emails via Microsoft’s Graph API.", so I think this might give you the email content that you need! Check it out and let me know if you need any further help!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
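If that add-on does ingest the messages, the search you end up with will look something like the sketch below; the index, sourcetype, and field names here are placeholders and assumptions, since they depend entirely on how the add-on names its data and extracts its fields:

index=your_o365_email_index sourcetype=your_o365_email_sourcetype
| table _time subject from_address to_address

Once the data is flowing, run a bare search over that index and check the interesting fields panel to see what the subject, sender, and recipient fields are actually called.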
This project is to test a potential on-prem to cloud migration. I need to thaw several terabytes of frozen Splunk data. It has been frozen over the past several years from an indexer cluster to offline repos. The storage array where my existing indexer cluster resides doesn't have enough disk space to bring it all back. I have a secondary storage array I can use that has plenty of space, but I can't move my existing cluster. I need help understanding/deciding:
- Should I build new indexers on the secondary array, add them to the existing cluster, and thaw data to them?
- Should I build a new cluster with new indexers on the secondary array and thaw the data there?
- Or is it easiest to just build one new standalone indexer on the secondary array and thaw all the data to that one indexer?
The data will need to be searchable/exportable, and I have only one search head (no search head cluster).
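Whichever layout is chosen, once frozen buckets have been copied into an index's thaweddb directory and rebuilt, one quick way to confirm the thawed buckets are visible from the search head is dbinspect; the index name below is a placeholder:

| dbinspect index=your_index
| search path="*thaweddb*"
| stats count AS thawed_buckets, min(startEpoch) AS earliest_epoch, max(endEpoch) AS latest_epoch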
Hi there,

We have an on-prem Exchange mailbox which we monitor via the Exchange logs. We pick out key words from the subject line to trigger alerts.

Our mailbox is moving into Exchange Online, so I've been working with our Azure team and have managed to integrate Splunk Enterprise (on-prem) with a test online mailbox. So far I am ingesting generic information about the mailbox via the Splunk Add-on for Microsoft Office 365: information like Issue Warning Quota (Byte), Prohibit Send Quota (Byte), and Prohibit Send/Receive Quota. The two inputs I've created are Message Trace and Mailbox (which ingests the mailbox data above). What I want to do is ingest the emails themselves: the key information like subject, the body (if possible), from address, and to address. Is this possible using this add-on?
Hello, this is installed directly on the Splunk Cloud instance. I just started using Splunk about a week ago. To my knowledge, I don't have CLI access to modify any files. I also don't see why I would need to, as there is no mention of a need to in the instructions. They seem to have built everything you would need into the app configuration pages, such as fields to input the API key and whatnot. I also found the thread you mentioned, but it seems no one was able to come up with a solution then either.
Hello, I only have this one app from S1 installed on the indexer/search head, which is in Splunk Cloud.
It would be helpful to know what you've tried already and how those efforts failed to meet expectations. Perhaps this will help:

| rex field=SourceIP "(?<Subnet>\d+\.\d+\.\d+\.*)"
| stats count as Count, values(SourceIP) as IPs by Subnet
I currently have this to group IPs into subnets and list the counts, and I want it to also show the IPs it has grouped as well:

| rex field=SourceIP "(?<Subnet>\d+\.\d+\.\d+\.*)"

Example:

Subnet   Count   IPs
1.1.1    20      1.1.1.1, 1.1.1.2, 1.1.1.3

How do I create another field, or use the existing field, to show what it has grouped?
"Important: The Splunk Connect for Kubernetes will reach End of Support on January 1, 2024. After that date, this repository will no longer receive updates from Splunk and will no longer be supported... See more...
"Important: The Splunk Connect for Kubernetes will reach End of Support on January 1, 2024. After that date, this repository will no longer receive updates from Splunk and will no longer be supported by Splunk. Until then, only critical security fixes and bug fixes will be provided. Splunk recommends migrating to Splunk OpenTelemetry Collector for Kubernetes. Please refer to this migration guide for more details."
Good Salutations! That error indicates that credentials cannot be found. It typically happens when there are multiple SentinelOne apps installed on the same instance (App, IA, TA). If there is more than one installed, remove the ones not meant for that tier (App => Search Heads, IA => HF/IDM, TA => IDX). These should be fully removed ("rm -rf", if you will), not just disabled. Once removed, re-configure the app and try again. Thanks!
The reason the search time period is not the same as the execution frequency is so we don't miss a 3-event occurrence that straddles two periods. If we ran it once every 60 minutes looking back 60 minutes, we wouldn't get duplicates, but we'd also miss the case in which one event occurred just before the start of the 60-minute window and two occurred just after it started. I will investigate stats count BY host, thank you.
Hi @spm807,
usually the time period is the same as the execution frequency, just to avoid duplicates. Anyway, you could configure throttling to avoid the same values firing the alert repeatedly. Throttling is configured by the fields in your alert. In other words, if you have a stats count BY host, you can configure the throttling by host, so if the alert has already fired for a given host it will not be triggered again for that host while the throttle is active.
Ciao.
Giuseppe
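As a concrete sketch of the approach being discussed (the base search is a placeholder), the alert search would look something like this, scheduled every 15 minutes over the last 60 minutes:

index=your_index sourcetype=your_sourcetype
| stats count AS occurrences BY host
| where occurrences >= 3

With throttling set on the host field for 60 minutes, a host that already triggered the alert is suppressed for the throttle window, so the overlapping look-back periods no longer produce repeat notifications for the same events.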
I would like to do a search of a rolling 60-minute period, looking for 3 or more occurrences in that period. I set up a Splunk alert to run every 15 minutes, looking back 1 hour, which works, but then I get multiple alerts for the same 3 events, as they still are in the 60-minute look back period for multiple runs, which happen every 15 minutes. How can I set this up (or code it) so it only reports the 3 occurrences in the previous 60 minutes once?
Thank you for the quick response. I have not configured that yet. Since I am still learning, a little more detail would be a great help. Do I add another action block for the output, or can I configure the end block to do it? Can you please share more info to help me?
Hi @Poojitha

Do you see anything in your $SPLUNK_HOME/var/log/splunk/splunkd.log relating to this? Any Python errors, stack traces, etc.?
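If you don't have direct access to the file, the same splunkd log is usually searchable through the _internal index; a minimal sketch for spotting recent errors (you may want to narrow it further by host or time range):

index=_internal sourcetype=splunkd log_level=ERROR
| stats count BY component
| sort - count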
Hi @Sidpet

Have you configured the playbook to output the fields you are interested in seeing? Check out https://docs.splunk.com/Documentation/SOAR/current/Playbook/CreatePlaybooks#:~:text=constructing%20your%20playbook.-,Add%20outputs%20to%20your%20playbooks,-You%20can%20add for more info on how to add outputs to your playbooks.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
I have a playbook that validates a given URL and assigns scores to it. I am able to run the playbook successfully but do not see the output. Where do I see it in the CrowdStrike app? I am new here and trying to learn SOAR.
Years later, same question. It seems it is not possible to configure custom HTTP headers. It's mandatory for us to consume a Threat Intelligence feed where basic auth is not supported. Is there a different way to get this issue solved somehow?