All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Does anyone have examples of how to use Splunk Phantom to investigate and remediate phishing emails?
Does anyone have examples of how to use Splunk Phantom to hunt for threats?
Does anyone have examples of how to use Splunk Phantom to protect an EC2 group from malicious traffic?
Does anyone have examples of how to use Splunk Phantom to determine if an IP address is malicious?
Does anyone have examples of how to use Splunk Phantom to automatically contain malicious insiders?
Does anyone have examples of how to use Splunk Phantom to investigate and remediate malware infections?
Does anyone have examples of how to use Splunk Phantom to prompt an analyst to block an endpoint?
Does anyone have examples of how to use Splunk Phantom to investigate and contain ransomware?
I've been trying to get a Splunk report for all users who logged into the domain controller. I have tried several options with no luck so far. Any help would be greatly appreciated.
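Not an authoritative answer, but a minimal SPL sketch for this kind of report, assuming Windows Security logs are collected with the usual Windows TA sourcetype and that the domain controller's hostname is known (the index, sourcetype, and host value below are placeholders):

```spl
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624 host="DC01"
| stats count as logins, earliest(_time) as first_login, latest(_time) as last_login by user
| convert ctime(first_login) ctime(last_login)
| sort - logins
```

EventCode 4624 is the Windows "successful logon" event; depending on the TA version the account field may be named `user`, `Account_Name`, or `src_user`, so check the extracted fields on a sample event first.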
How do I calculate the average number of logs received from a sourcetype over the last 30 days, and then check whether the volume in the last 24 hours has dipped/dropped more than 70% below that average?
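One possible approach (a sketch, not a tested answer — the sourcetype is a placeholder, and `tstats` would be faster than `stats` at scale): compute a daily count, take the 30-day average, and compare the most recent day against it.

```spl
index=* sourcetype=my_sourcetype earliest=-30d
| bin _time span=1d
| stats count by _time
| eventstats avg(count) as daily_avg
| where _time >= relative_time(now(), "-1d@d")
| eval drop_pct = round((daily_avg - count) / daily_avg * 100, 2)
| where drop_pct > 70
```

The alert fires only when the last day's count is more than 70% below the 30-day daily average; note the average here includes the dip day itself, which slightly dampens the comparison.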
I have a CSV like this:

        PPAGE_ID1  PPAGE_ID2  PPAGE_ID3  PPAGE_ID4  PPAGE_ID5  PPAGE_ID6
1-Jan   123        123        123        123        123        123
2-Jan   456        456        456        456        456        456
3-Jan   789        789        789        789        789        789
4-Jan   98         98         98         98         98         98
5-Jan   87587      87587      87587      87587      87587      87587

How can I take the average by PPAGE_ID6 or PPAGE_ID100? Please help.
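If the CSV is available as a lookup, averaging one column is a one-liner (a sketch — `mydata.csv` is a placeholder name for the uploaded lookup file):

```spl
| inputlookup mydata.csv
| stats avg(PPAGE_ID6) as avg_ppage_id6
```

If the file is indexed as events instead, replace the `inputlookup` with a normal search over its source and keep the same `stats avg(...)` tail. For many columns (PPAGE_ID1 through PPAGE_ID100), `| stats avg(PPAGE_ID*)` averages every matching field in one pass.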
Hello, I have installed the ServiceNow add-on app, and my ServiceNow administrator has followed all the required steps on the ServiceNow side. The alert action with ServiceNow incident integration works fine and creates incidents in ServiceNow. However, the alert action only exposes a limited set of fields — for example, we cannot set the IMPACT field, so ServiceNow auto-assigns the impact. I therefore wanted to use a custom generating command that gives me the flexibility to create the ServiceNow incident with additional fields as parameters. Here is my search (my alert condition raises a ServiceNow incident if a server exceeds 90% CPU):

index=os host=* sourcetype=cpu cpu=all NOT( [| inputlookup servers.csv | where status="decom" OR status="complete blacklist" OR status="DC Outage" | rename target as host | table host])
| eval PercentCPULoad = 100 - pctIdle
| stats min(PercentCPULoad) as PercentCPULoad by host
| eval hostname=upper(mvindex(split(host,"."),0))
| where PercentCPULoad >= 90
| eval timestamp=strftime(now(),"%Y-%m-%d %H:%M:%S")
| eval Impact = 1
| snowincident --account "ServiceNow Dev" --category "Hardware" --correlation_id timestamp.":".hostname --impact 1 --state 1 --contact_type "Email" --short_description "Nishad - Splunk Created - CPU utilization is".PercentCPULoad." on ".hostname." Threshold - 90 <= ".PercentCPULoad." <=100" --assignment_group "Tools Testing Group" ci_identifier=hostname

However, this doesn't work and I get the error message below:

Error in 'snowincident' command: This command must be the first command of a search.

Per the Splunk documentation, there are certain steps that need to be carried out on the ServiceNow server to integrate with Splunk; my SNOW administrator confirmed that he has followed all the steps in this documentation:
https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/ConfigureServiceNowtointegratewithSplunkEnterprise

Can you please suggest what is missing? When searching, I am using the SNOW_TA app, and the 'snowincident' command is not detected.
After upgrading the Splunk App for AWS from version 5.0.2 to a later 5.x version, the search head does not start, with the following error:

Problem parsing indexes.conf: Cannot load IndexConfig: stanza=aws_vpc_flow_logs Required parameter=homePath not configured
Validating databases (splunkd validatedb) failed with code '1'.

Has anyone hit the same issue, and how did you resolve it? Thanks.
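The error itself says the `aws_vpc_flow_logs` stanza in indexes.conf is missing the required `homePath` parameter. A hedged sketch of what a complete stanza looks like — the paths below are conventional defaults, not values taken from the app, so adjust them to your storage layout:

```ini
[aws_vpc_flow_logs]
homePath   = $SPLUNK_DB/aws_vpc_flow_logs/db
coldPath   = $SPLUNK_DB/aws_vpc_flow_logs/colddb
thawedPath = $SPLUNK_DB/aws_vpc_flow_logs/thaweddb
```

`homePath`, `coldPath`, and `thawedPath` are all required for any index stanza that actually defines an index; check whether the upgrade left a partial stanza behind in the app's local/indexes.conf.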
Does anyone know if it's possible, as part of a workflow action, to tag an event? I would love to be able to add a tag to specific events indicating the event was acknowledged after running a specific action on it (sending event info to a third-party app). Thanks!
The heavy forwarder is RHEL 7.7, the Splunk binaries are 7.2.9.1, and the TA is version 3.5.8 (3.6.8 does the same). We're getting the data, and when one looks at the events they have proper Unix timestamps in them, but when they are indexed they all get a time of midnight. We tried this a few days ago on an old VM (RHEL 6 and Splunk 6.6.12.1) that just couldn't keep up with the volume, but it did seem to timestamp properly. Moving to the new VM is when we found the timestamp issue. How can I correct the timestamps in Splunk?
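When events carry epoch timestamps but index at midnight, Splunk is usually falling back to date-only recognition instead of parsing the epoch value. A hedged props.conf sketch for the parsing tier (the heavy forwarder here) — the sourcetype name and the assumption that the epoch value starts the event are placeholders for your data:

```ini
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 13
```

`%s` tells Splunk to read a Unix epoch; if the timestamp sits mid-event, point `TIME_PREFIX` at the text immediately before it. This must live on the first full Splunk instance the data passes through, so a props change on the indexers would be ignored for data already cooked by the heavy forwarder.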
Hi, I want to subtract the subsearch results from the main search, i.e.:

index=main source=/folder/abc.csv | table customername - [index=main source=/folder/xxx.csv | table name]

Is this achievable? I want to get only the names which are not common to both files. Thanks.
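You can't subtract with `table`, but a subsearch used as a NOT filter does exactly this set difference. A sketch built on the sources from the question — the `rename` aligns the two field names so the subsearch output filters the right field:

```spl
index=main source=/folder/abc.csv
    NOT [ search index=main source=/folder/xxx.csv
          | rename name as customername
          | fields customername ]
| table customername
```

The subsearch returns its `customername` values as an implicit `OR` condition, and the `NOT` keeps only events from abc.csv whose name does not appear in xxx.csv. Keep in mind the default subsearch limits (result count and runtime) if xxx.csv is large.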
How do I find the top 10 processes per hour? I need to capture CPU, RAM, and process threads.
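A sketch assuming the `ps` data from the Splunk Add-on for Unix and Linux — the index name is a placeholder, and the field names (`pctCPU`, `pctMEM`, `COMMAND`) depend on your TA version, so verify them against your extracted fields:

```spl
index=os sourcetype=ps
| bin _time span=1h
| stats avg(pctCPU) as avg_cpu, avg(pctMEM) as avg_mem by _time, COMMAND
| sort _time, - avg_cpu
| streamstats count as rank by _time
| where rank <= 10
```

The `streamstats`/`where` pair ranks processes within each hourly bucket and keeps the top 10 per hour. Thread counts are not in the default `ps` output on every platform; if your source exposes a threads field, add it to the `stats` line the same way.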
I have a log that I am trying to parse and I am unable to figure this out. It looks like a type of XML file. Here is an example: <ErrorMessage Id='20200130111127151' Date='1/30/2020' Time='11:11 ... See more...
I have a log that I am trying to parse and I am unable to figure this out. It looks like a type of XML file. Here is an example:

<ErrorMessage Id='20200130111127151' Date='1/30/2020' Time='11:11 AM' >
  <RequestInformation Hostname='1.2.3.4' HostAddress='5.6.7.8' HostBrowser='Mozilla/4.0 (compatible; MSIE 6.0; MS Web Services Client Protocol 4.0.30319.42000)' ReferringPage='' RequestType='POST' ContentLength='505' RawUrl='/dir/subdir/filename.asmx'>
    <Browser Type='IE6' Browser='IE' Version='6.0' Platform='Unknown' SupportsFrames='True' SupportsJavascript='True' SupportsTables='True' SupportsCookies='True'/>
    <Cookies> </Cookies>
    <Form> </Form>
  </RequestInformation>
  <Exception Message='ORA-01017: invalid username/password; logon denied'>
    <StackTrace>
      <![CDATA[
        at Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, String procedure, Boolean bCheck, Int32 isRecoverable, OracleLogicalTransaction m_OracleLogicalTransaction)
        at Oracle.DataAccess.Client.OracleException.HandleError(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, Object src, OracleLogicalTransaction m_oracleLogicalTransaction)
        at Oracle.DataAccess.Client.OracleConnection.Open()
        at dhss.webservice.login_ws.MExecuteComponent.AuthenticateToAPP(String UserID, String Password, String DBInstance, String ServerIP, String ServerPort)
      ]]>
    </StackTrace>
  </Exception>
</ErrorMessage>

I have the Add-on for Oracle Database installed, but it doesn't seem to work with this one.
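Since this is application-generated XML rather than the database audit logs the Oracle add-on expects, one option is to extract fields at search time with `spath`. A sketch — the index and sourcetype names are placeholders, and it assumes each event is one complete <ErrorMessage> block:

```spl
index=my_index sourcetype=my_xml_errors
| spath input=_raw path=ErrorMessage{@Id} output=msg_id
| spath input=_raw path=ErrorMessage.Exception{@Message} output=error_msg
| spath input=_raw path=ErrorMessage.RequestInformation{@Hostname} output=req_host
| table _time msg_id req_host error_msg
```

In `spath` paths, `{@Attr}` addresses an XML attribute and dots walk nested elements. Alternatively, setting `KV_MODE = xml` in props.conf for the sourcetype auto-extracts these fields without per-search `spath` calls.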
How can I properly extract just the client that is doing the query from the log entries below? I noticed that on some log entries the word "client" is followed by @xxxxx characters, and on some it isn't. Splunk field extraction produced the regex below, but it includes the word "client" in some of the extracted IPs. Any help is appreciated. Thanks.

^(?:[^ \n]* ){5}(?P[^#]+)

2020-01-30T12:50:39-05:00 173.12.5.49 named[15584]: client @0x7f74cc307f80 173.27.28.143#50046 (www.google.ru): query: www.google.ru IN A + (173.20.3.47)
2020-01-30T12:50:21-05:00 173.19.9.46 named[15584]: 30-Jan-2020 12:50:21.069 client 173.24.28.149#50769: UDP: query: sync3.adsniper.ru IN A response: SERVFAIL +
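Rather than counting space-delimited tokens (which breaks when the optional `@0x...` token appears), anchoring on the literal `client` keyword and the `#port` suffix handles both log shapes. A sketch — `client_ip` is just a field name I chose:

```spl
... | rex field=_raw "client (?:@\S+ )?(?<client_ip>\d{1,3}(?:\.\d{1,3}){3})#"
```

The `(?:@\S+ )?` group optionally consumes the `@0x7f74cc307f80`-style handle, so `client_ip` captures `173.27.28.143` from the first sample and `173.24.28.149` from the second. The same pattern can be pasted into a field extraction in props/transforms once it checks out in search.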
We have cases in which there is no date in the log files, meaning, only the time of the event is in the data. What can we do in such cases?
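When events carry only a time of day, Splunk can still parse that time and will fill in the missing date from context (the previous event's date, the filename, the file modification time, or the current date, in that order of preference). A hedged props.conf sketch — the sourcetype name is a placeholder, and it assumes the time starts each event:

```ini
[my_timeonly_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 8
```

If the events have no usable timestamp at all, `DATETIME_CONFIG = CURRENT` stamps each event with the indexing time instead; that sacrifices accuracy for data that arrives with any delay, so the `TIME_FORMAT` route is preferable when the time of day is genuinely present.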