All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Is there a way to replicate all the alerts present in one environment (production) to another environment (non-prod)? Both are hosted in the same place (the URL is the same), but the indexes of the two environments are different. The main constraint is that we cannot access the backend, as we have only limited access. Can anyone please help me achieve this? Thanks in advance.
hi, I have a table as shown below. I want to get the % of total for each status for the previous 6 days. How do I write a query to get this (the % of total, and also to query only the previous 6 days)?

DATE        A   B    C
2021-05-19  14  33   123
2021-05-18  45  12   456
2021-05-17   4   6   213
2021-05-16   5   8   564
2021-05-15   4   9   987
2021-05-14   4   0   543
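One possible approach (an untested sketch; index=my_index is a hypothetical name, and it assumes A, B, and C are already extracted as fields): sum per day, add a row total, then compute each column's percentage of that total.

```spl
index=my_index earliest=-6d@d
| bin _time span=1d
| stats sum(A) as A sum(B) as B sum(C) as C by _time
| addtotals fieldname=total A B C
| foreach A B C
    [ eval pct_<<FIELD>>=round('<<FIELD>>'/total*100, 2) ]
```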
I have a file which is indexed (say, today) and then indexed again after being updated (say, tomorrow). I have to compare the events of the two versions and display the event(s) present in the new version but not in the old, or vice versa. Can anyone help?
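One hedged sketch (index and source names are hypothetical), assuming both versions land in the same index/source and were indexed on different days: group identical raw events and keep those seen on only one of the two days.

```spl
index=my_index source=my_file earliest=-2d@d
| eval day=strftime(_time, "%Y-%m-%d")
| stats values(day) as days dc(day) as day_count by _raw
| where day_count=1
```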
Hi Splunkheads, I need some advice here. I have built a simple lookup table and a simple search for known bad IP addresses. My search runs against the lookup table and returns a table of any matches across the environment. Here is my search:

| tstats summariesonly=t fillnull_value="MISSING" count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port, _time, All_Traffic.action, All_Traffic.bytes, index, sourcetype
| lookup ioc_ip.csv ioc_ip as All_Traffic.src OUTPUT ioc_ip as src_found
| lookup ioc_ip.csv ioc_ip as All_Traffic.dest OUTPUT ioc_ip as dest_found
| where !isnull(src_found) OR !isnull(dest_found)
| fields - src_found, dest_found
| sort -_time

I have been asked to auto-expire rows in the lookup after 30 days. The logic would be something like: if ioc_ip_date is older than 30 days, delete the row; else, run the search.

I have added dates to my lookup table. Here is a dummy example of my lookup table: My questions: 1. What is the best format for the ioc_ip_date column? Would it be best to use epoch time? I am currently using "2021-18-05" as above, but I am happy to convert to any format. 2. Any suggestions on how to add this logic to the above search?
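On question 1, epoch time makes age comparisons trivial; if you keep a text date, an unambiguous format such as %Y-%m-%d is easier to parse. On question 2, one common pattern (an untested sketch) is a separate scheduled search that rewrites the lookup, keeping only rows newer than 30 days, so the detection search never sees expired rows:

```spl
| inputlookup ioc_ip.csv
| where strptime(ioc_ip_date, "%Y-%m-%d") >= relative_time(now(), "-30d@d")
| outputlookup ioc_ip.csv
```

This assumes ioc_ip_date is stored as %Y-%m-%d; with epoch values the where clause becomes a plain numeric comparison.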
I have entered the appropriate host, port, and database, so why do the indexer's host and server port still appear? Please help me.
I am trying to find events based on when they were initially logged, grouped by some column. For example, from the table below, I want the total count of unique "keyId" values by first appearance, grouped by "parent1" and shown on a timeline using timechart:

April 20 | A01 | 2  (DATT-001 and DATT-002 first appeared in April)
May 20   | A02 | 1  (DATT-003 first appeared in May)

_time                          keyId     parent1  parent2  parent3  status  eventdetails
2020-04-19T23:47:21.000+10:00  DATT-001  A01      B01      C01      Pass
2020-04-20T2:47:21.000+10:00   DATT-001  A01      B01      C01      Fail
2020-05-20T2:47:21.000+10:00   DATT-001  A01      B01      C01      Fail
2020-06-20T2:47:21.000+10:00   DATT-001  A01      B01      C01      Fail
2020-04-20T2:47:21.000+10:00   DATT-002  A01      B01      C01      Fail
2020-05-20T2:47:21.000+10:00   DATT-002  A01      B01      C01      Fail
2020-05-20T2:47:21.000+10:00   DATT-003  A02      B01      C01      Fail

Any help please?
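A hedged sketch (the index name is hypothetical): compute each keyId's first appearance per parent1 with stats, reassign it to _time, then timechart distinct keyIds.

```spl
index=my_index
| stats earliest(_time) as first_seen by keyId, parent1
| eval _time=first_seen
| timechart span=1mon dc(keyId) by parent1
```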
I've created a custom metric that populates the events tab in database visibility with data from a query. Is it possible to create an action that matches these events and sends them via email, HTTP request, etc.? I assume the other option is to pull them via the API, but I'd rather not have to write something specifically to do this. Thanks! Jeremy.
Need help with a query, please. I have ticket data where the life cycle is Assigned, Work in Progress, Fixed, Closed, and the ticket is assigned to our group ABC. I want to display only the tickets which are Assigned or Work in Progress for group ABC. My end goal is to show ABC group's count of tickets which are not Fixed or Closed. Sample data:

TICKET    STATUS            GROUP
TIC12345  Assigned          ABC
          Work in Progress  ABC
          Fixed             DEF
          Closed            DEF

I have the below query so far:

index=* source=* group=ABC
| stats latest(status) as l_status latest(group) as l_group by TICKET
| search NOT l_status in("Fixed", "Closed")

Result:

TICKET    STATUS            GROUP
TIC12345  Work in Progress  ABC

I was able to get the data; however, I'm also getting tickets which are closed (because the ticket is initially assigned to ABC and later closed by DEF). Appreciate your help!
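One possible fix (an untested sketch): the base search filters group=ABC, so the later Fixed/Closed events owned by DEF never reach stats, and latest(status) is computed only over ABC's events. Dropping the group filter from the base search and filtering after stats lets the full ticket history decide:

```spl
index=* source=*
| stats latest(status) as l_status values(group) as groups by TICKET
| search groups="ABC" NOT l_status IN("Fixed", "Closed")
```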
Hey Splunkers, is there any possibility of having 2 separate incident review dashboards: the 1st for production use cases and the 2nd for development/test use cases?
I want to add more columns that will show the sessions, such as sudo, su, ssh, etc. Currently I have this:

index="name of index" user=* | chart count by user, action | sort user

Maybe I need two separate searches? One for action=failure and another for action=success? I'm trying to break down the totals for successes and failures. Specifically, I'd like to know when ssh, sudo, and su are being used.
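A possible direction (an untested sketch; the field name "app" is an assumption and may be called process, command, or similar in your data): combine the command and the action into one split-by value so each combination becomes its own column, avoiding two separate searches.

```spl
index="name of index" user=* app IN(ssh, sudo, su)
| eval app_action=app . ":" . action
| chart count over user by app_action
```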
Hi, so I have a goal to count user visits, but the log polls too frequently, so we are going to define a visit as one user per day. In this instance the data is not yet in Splunk but in an Excel spreadsheet. I'm not very good with Excel, so I want to add the data to Splunk and use the bin feature. I have userid and date. I can use either the time field or the date field; currently the date field is mm/d/yyyy, and I can reformat it if that makes things easier. Once I have my lookup, how do I use the equivalent of bin _time span=1d when the time is now a date field?
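One sketch, assuming the spreadsheet is uploaded as a lookup named visits.csv (a hypothetical name) with userid and date columns: parse the date into epoch time with strptime, assign it to _time, then bin and count as usual.

```spl
| inputlookup visits.csv
| eval _time=strptime(date, "%m/%d/%Y")
| bin _time span=1d
| stats dc(userid) as visits by _time
```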
Can someone kindly help me? I am attempting to create a report on the "Litigation Hold status" in Office 365, though I cannot find a field with this name in the 365 index. I need help on what I should search to find the field with this value.
Hello all, running the following search (direct count) at different times of the day for the same time period, I receive different results:

sourcetype=x index=y access_method="Explicit Proxy" | table app,category,activity,user | dedup user | stats dc(user) by app

I can use this search instead, but I also get different results for the same time period (last 90 days):

sourcetype=x index=y access_method="Explicit Proxy" | table app,category,activity,user | dedup user | stats count by app

Results look like this:

App   dc(user)
app1  499
app2  36
app3  19

Any suggestions on what my issue may be? Thanks.
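A likely explanation (a judgment call, not verified against your data): dedup user keeps only one event per user across all apps, and which app that surviving event carries depends on event order, so the per-app split is unstable between runs. Since dc(user) already deduplicates users within each group, the dedup can simply be dropped:

```spl
sourcetype=x index=y access_method="Explicit Proxy"
| stats dc(user) as distinct_users by app
```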
Hello! I have a field value that looks like: abcd124567-1609173498. I want to remove "abcd" and "-1609173498" so that only 124567 remains as the field value. Can someone help me construct a rex field=fieldname mode=sed expression to get this done?
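One sed-style sketch, assuming the value is always a literal "abcd" prefix, digits, a hyphen, and more digits:

```spl
| rex field=fieldname mode=sed "s/^abcd(\d+)-\d+$/\1/"
```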
I've recently installed Splunk to begin learning how to use it, and the first thing I wanted to do was parse the logs from a pfSense firewall. I believe the TA-pfsense application is meant to help parse the syslog information, but despite my best efforts I cannot get it working. My environment is the following:

Splunk - 8.1.3 (single instance)
pfSense - 21.02.2, sending logs in syslog format
TA-pfsense v2.5, released March 3, 2021

Splunk is receiving the syslog events into an index called 'network', and the events are labelled with the default pfsense sourcetype, but this is not being parsed into the various other sourcetypes (pfsense:filterlog, pfsense:unbound, etc.). I grabbed the REGEX string from transforms.conf and did some testing against the events getting pulled into Splunk; it seems the string is not formatted for the logs I have. I made the following change:

Original: REGEX = \w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s(?:[\w.]+\s)?(\w+)
Updated: REGEX = \w{1,3}\s\w{4}-\w{1,2}-\w{1,2}T\d{1,2}:\d{1,2}:\d{1,2}.\d{1,6}-\d{1,2}:\w{1,2}\s\w+.?\w+.?\w+(?:[\w.]+\s)?(\w+)

Admittedly I am very new to regex, so the above might be less than ideal, but it does seem to parse out the sourcetype. However, after crossing that hurdle, it seems all of the EXTRACT statements also don't match the log format Splunk is gathering. Is anyone else running a current version of pfSense with the latest TA-pfsense application and having similar issues? Any pointers would be appreciated. I've searched around but have not seen any current posts reporting a similar issue. Thanks!
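For reference, a hedged transforms.conf fragment for ISO 8601 / RFC 5424-style lines ("2021-05-18T12:34:56.123456-04:00 host process ..."), which is roughly the shape the updated regex above is matching; this is untested, and the exact line layout in a given syslog feed may differ:

```
REGEX = ^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:[+-]\d{2}:\d{2}|Z)\s+\S+\s+(\w+)
```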
Good afternoon, I am trying to figure out a way to iterate through a list and count each value only once; I'm hoping that will make my query speedier. Here's my current query:

index=* eventtype IN(valueA,valueB,valueC) | stats count by eventtype

and the result looks like this:

eventtype  count
valueA     102
valueB     407
valueC     1034

What I'd like is a query where, once it finds a value in the field one time, it moves on to find the next value. This is how I want the output to look:

eventtype  count
valueA     1
valueB     1
valueC     1

Any help would be appreciated.
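A hedged sketch: dedup keeps only the first event per eventtype, which yields a count of 1 per value. Note that dedup does not terminate the underlying search early; it only discards duplicates before stats, so the speedup may be modest.

```spl
index=* eventtype IN(valueA, valueB, valueC)
| dedup eventtype
| stats count by eventtype
```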
Hello, We're trying to get a UF on a Domain Controller to monitor two different OUs in the AD as follows:   [admon://AdminAccounts] targetDc = dc01.mydomain.com startingNode = OU="Administrative A... See more...
Hello, We're trying to get a UF on a Domain Controller to monitor two different OUs in the AD as follows:   [admon://AdminAccounts] targetDc = dc01.mydomain.com startingNode = OU="Administrative Accounts", DC=mydomain, DC=com index = admon [admon://ElevatedPrivs] targetDc = dc01.mydomain.com startingNode = "OU=Elevated Privileges", DC=mydomain, DC=com index = admon     The UF is running under a Domain Service Account with full read access to the tree. We're getting the following errors:   ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdQuery::OutputStartEvent: Failed to search attributes of root object: err='0x20' ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdEventCollector::OutputStartEvent: Failed in OutputStartEvent, ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdEventCollector::InitCollector: LoadContextState failed: (0x80004005)Unspecified error -- attempting to reload server path ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdQuery::OutputStartEvent: Failed to search attributes of root object: err='0x20' ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdEventCollector::OutputStartEvent: Failed in OutputStartEvent, ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdEventCollector::InitCollector: LoadContextState failed again with DCName='dc01.mydomain.com': (0x80004005)Unspecified error -- no more retries ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - ADMonitor::init: Failed to initialize Active Directory usn context. 
ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - ADMonitorThread::launchADMonitor: Failed to initialize ADMonitor='admon://ElevatedPrivs', targedDC='dc01.mydomain.com' ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdQuery::OutputStartEvent: Failed to search attributes of root object: err='0x20' ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdEventCollector::OutputStartEvent: Failed in OutputStartEvent, ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdEventCollector::InitCollector: LoadContextState failed: (0x80004005)Unspecified error -- attempting to reload server path ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdQuery::OutputStartEvent: Failed to search attributes of root object: err='0x20' ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdEventCollector::OutputStartEvent: Failed in OutputStartEvent, ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - AdEventCollector::InitCollector: LoadContextState failed again with DCName='dc01.mydomain.com': (0x80004005)Unspecified error -- no more retries ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - ADMonitor::init: Failed to initialize Active Directory usn context. 
ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-admon.exe"" splunk-admon - ADMonitorThread::launchADMonitor: Failed to initialize ADMonitor='admon://ElevatedPrivs', targedDC='dc01.mydomain.com'

We can't figure out what (0x80004005) Unspecified error or err='0x20' actually mean. Are we missing something here? Is there a problem with having a space (" ") character in the OU names? Please advise.
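For what it's worth, err='0x20' is decimal 32, which in LDAP result codes usually corresponds to noSuchObject, suggesting the startingNode DN is not resolving. Note the two stanzas quote the DN differently (OU="Administrative Accounts" vs "OU=Elevated Privileges"). A hedged inputs.conf sketch with consistent, unquoted DNs (spaces inside an RDN value are legal in a DN; this is untested):

```
[admon://AdminAccounts]
targetDc = dc01.mydomain.com
startingNode = OU=Administrative Accounts,DC=mydomain,DC=com
index = admon

[admon://ElevatedPrivs]
targetDc = dc01.mydomain.com
startingNode = OU=Elevated Privileges,DC=mydomain,DC=com
index = admon
```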
When I run the "aws" command as a normal user or as root, it works. When I run the "aws" command as the splunk user, it produces the Python error "ImportError: No module named httpsession". I believe this is because the version of Python in ~splunk/bin is different from the system's version. How do I get the httpsession module installed into Splunk's version of Python?
Hello, I ran my app against AppInspect and received the following failure:

check_for_expansive_permissions: A posix world-writable file was found. File: appserver/static/images/

Given that I developed my app in a Windows environment, I created a Linux box running Splunk and changed the permissions manually, then packaged the app using Splunk's slim package command. This got rid of the failure but didn't let me submit my app to Splunkbase, because Splunk was "Unable to Extract Package". Can anyone point me in the right direction? -Marco C.
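A possible pre-packaging step (a sketch; "myapp" is a hypothetical app directory name): strip group/world write bits on the Linux box, verify nothing world-writable remains, and package the directory as a plain .tar.gz.

```shell
# Simulate the app layout and the offending permissions, then fix them.
mkdir -p myapp/appserver/static/images
chmod 777 myapp/appserver/static/images   # world-writable, as flagged by AppInspect
chmod -R go-w myapp                       # remove group and world write bits recursively
find myapp -perm -0002                    # prints nothing if no world-writable files remain
tar -czf myapp.tar.gz myapp               # package the cleaned directory
```

If the uploaded package still fails with "Unable to Extract Package", it may be worth comparing the archive produced this way against the slim output before resubmitting.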
Why avoid RAID5 on SSD when using SmartStore?