All Topics

Hi all, I usually onboard Windows Server 2008 and newer, but 2003 is not working with the stanza below:

# Windows platform specific input processor.
[WinEventLog://Application]
disabled = 0
[WinEventLog://Security]
disabled = 0
[WinEventLog://System]
disabled = 0

Is it possible to read the files directly instead, like this?

[monitor://C:\WINDOWS\System32\config\AppEvent.Evt]

Best, N.
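For what it's worth, monitoring the raw .Evt files is unlikely to help: they are a binary format that the file monitor cannot parse, so you would index unreadable bytes. A hedged sketch of an inputs.conf that sticks with the WinEventLog stanzas, assuming a universal forwarder version old enough to still support Windows Server 2003 (recent UF versions dropped it):

```
# inputs.conf on the 2003 host (sketch; assumes an older UF release
# that still supports Windows Server 2003)
[WinEventLog://Application]
disabled = 0
[WinEventLog://Security]
disabled = 0
[WinEventLog://System]
disabled = 0

# Avoid [monitor://C:\WINDOWS\System32\config\AppEvent.Evt]:
# .Evt files are binary, not line-oriented text.
```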
Question: how can we find the diff between log statements before and after a given date?

Applicability: let's say we release new application code and I want to be able to see all new events that the application has started logging. The definition of "new" is admittedly vague here, but any suggestion would help. The idea is that Splunk should be able to compare the types of events that were being logged earlier and show only the new events that were not present before. That would help surface any new exceptions, errors, or warnings that are being logged but have not yet shown up as a failed customer interaction.

Example: after a new code release, our application started logging a WARN event regarding "open file handlers" that kept building up over time and ultimately reached a stage where no more Unix file handles were available to process any new request.
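One hedged way to approximate "new event types" is to compare event shapes before and after the release using the auto-extracted punct field, which fingerprints an event's punctuation pattern (index name and release date below are placeholders; punct is coarse, so this is a sketch, not a precise diff):

```
index=app_logs earliest=-30d
| eval period=if(_time < strptime("2020-06-01", "%Y-%m-%d"), "before", "after")
| stats count(eval(period="before")) as before,
        count(eval(period="after"))  as after,
        latest(_raw) as sample by punct
| where before=0 AND after>0
| table punct, after, sample
```

The `| cluster` command is another option for grouping similar events if punct proves too coarse or too strict.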
Hi, I have a custom search command that takes a raw string as input, but when I combine it into a search Splunk doesn't understand it and always returns an error. Example:

|example rawstring="{"EventCode": "13","EventType": "SetValue","TargetObject": "(?mi)Software[//\\\\]{0,2}Microsoft[//\\\\]{0,2}Windows[//\\\\]{0,2}CurrentVersion[//\\\\]{0,2}Run"}}"

Can anyone help me pass it? Thanks in advance.
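The inner double quotes terminate the rawstring argument early, which is the usual cause of that parse error. A hedged sketch with each inner quote escaped as \" (the TargetObject regex is truncated here for readability; keep yours unchanged):

```
| example rawstring="{\"EventCode\": \"13\", \"EventType\": \"SetValue\", \"TargetObject\": \"(?mi)Software...Run\"}"
```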
Hi all, I have created a lookup table and imported it into Splunk. It has two columns, one called hosts and the other called IPs, populated with the hosts I want to query. I'm very new to Splunk and would like to create a search that returns errors/events worth investigating from the hosts specified in the lookup file. I'll display the results on a dashboard and check it daily for preventative maintenance on my system. I'm just after events worth looking into and need to filter out irrelevant events to save time. Can anyone help? Thanks in advance.
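A hedged starting point, assuming the lookup file is named host_list.csv and its hosts column matches the host field in your events (file name, index, and error terms are placeholders to adjust):

```
index=* [| inputlookup host_list.csv | rename hosts as host | fields host ]
    (ERROR OR error OR fail* OR critical)
| stats count by host, source, sourcetype
| sort - count
```

The subsearch expands into host="..." OR host="..." filters, so only events from the lookup's hosts are returned; the keyword list then narrows those to likely problems.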
Hi all, first post here. I'm a Splunk beginner and recently got this tricky task. Let's say I have these rows in my log file:

2020-01-01: error778
2020-01-02: error778
2020-01-03: error778
2020-01-16: error778
2020-02-01: error778
2020-02-04: error778
2020-02-06: error778
2020-02-10: error778
2020-02-18: error778
2020-02-19: error778

In Jan 2020 there are 4 rows of error778; in Feb 2020 there are 6. That means there is a 50% increase in error778 from Jan 2020 to Feb 2020.

The questions: How can I get/display the % difference? Ideally the delimiters can be days, months, years, or date ranges (for example, the diff of error778 between 1-5 Jan 2020 and 5-31 Jan 2020). And what's the best way to set an alert based on % (say, alert when the diff is > 15%)?

I'm able to display the daily/weekly/monthly trend of a keyword using timechart like below:

index=mylog "error778" | timechart span=1month count by date

But I believe it's far from what I need. Any help would be appreciated, thanks.
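One hedged sketch for the percent change between consecutive buckets, using delta to subtract the previous bucket's count (swap span=1mon for 1d or 1w as needed; for your Jan/Feb example this yields 50):

```
index=mylog "error778"
| timechart span=1mon count
| delta count as change
| eval pct_change = round(change / (count - change) * 100, 1)
```

Since count - change is the previous bucket's count, pct_change is the percent difference from that bucket. For the alert, save the search with an added | where pct_change > 15 and trigger when the result count is greater than zero.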
In a panel title like <title> A B </title>, how can I add a line break between A and B?
Hi. I have a Splunk server on Windows and a VMware VM, also running Windows, that should forward data to the host system. I read the documents and followed them step by step to install and configure the universal forwarder on the VM, and the receiving ports on both machines are open via firewall rules. But when I add data to the Splunk server, it shows the message "There are currently no forwarders configured as deployment clients to this instance". I searched further but it is not fixed; the solutions for this subject on this community did not work for me. Please help. Thanks.
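For reference, that message is about deployment clients, which are configured separately from plain data forwarding. A hedged sketch of the two files on the forwarder (host name is a placeholder; 9997 is the conventional receiving port and 8089 the management port):

```
# outputs.conf on the universal forwarder: where event data is sent
[tcpout:default-group]
server = splunk-server.example.com:9997

# deploymentclient.conf: only needed so the forwarder "phones home"
# to a deployment server, which is what that message checks for
[deploymentclient]

[target-broker:deploymentServer]
targetUri = splunk-server.example.com:8089
```

The receiving side also has to be enabled on the server (Settings > Forwarding and receiving > Configure receiving > port 9997); data can flow fine even while that deployment-client message appears.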
Hello, I need help. I have 6500 IINs (like IDs). I put these IDs into a lookup and then tried this search:

index=alfa [|inputlookup IIN_oleg.csv |rename IIN as search | fields search]

It returns results only for the first IIN in the lookup. If I search without the lookup, using just 10 IINs joined with OR, it gives me 10 results.
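Two hedged things to check. First, run | inputlookup IIN_oleg.csv on its own and confirm each IIN is a separate row in a column named IIN (a single multi-value cell would behave like one term). Second, if the IIN lives in a specific event field rather than appearing as a bare indexed term, rename the lookup column to that field name so the subsearch expands into field=value ORs (customer_iin below is an assumed name):

```
index=alfa
    [| inputlookup IIN_oleg.csv
     | rename IIN as customer_iin
     | fields customer_iin
     | format ]
```

6500 rows is under the default subsearch result limit (10,000 in limits.conf), so the limit itself should not be the problem here.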
Hi, I have a log that contains the following: dn=site,dn=com,dn=au. I would like to extract and concatenate all these values into a single capture group with periods between the words, so the extracted field looks like site.com.au. How can I do this with regex?
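A single capture group can't drop the interior dn= markers, but a capture-then-clean approach works. A hedged sketch: rex grabs the whole dn chain into one field, then replace() strips the dn= prefixes and turns commas into periods:

```
| rex "(?<dn_raw>dn=[^,\s]+(?:,dn=[^,\s]+)*)"
| eval dn_joined = replace(replace(dn_raw, "dn=", ""), ",", ".")
```

For dn=site,dn=com,dn=au this yields dn_joined=site.com.au, since replace() substitutes every occurrence.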
Hi, I'm doing some custom regex extractions for various fields, and often they sit under a bigger field, for example requesterDN=\"ou=*,uid=*... Is there a way to have a period character (.) in the name of a regex capture group? And if so, how?
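PCRE capture-group names only allow letters, digits, and underscores, so a period can't appear in the group itself; the usual workaround is to capture under an underscore name and rename to the dotted field afterwards. A hedged sketch (the pattern is illustrative, adjust to your data):

```
| rex "requesterDN=\"ou=(?<requesterDN_ou>[^,]+)"
| rename requesterDN_ou as "requesterDN.ou"
```

Splunk field names themselves may contain dots; it is only the regex group-name syntax that forbids them.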
I have a field named Msg which contains JSON. That JSON contains some values and an array. I need to get each item from the array onto its own line (line-chart line) and also get one of the header values as a line. So on my line chart I want a line for each of: totalSorsTime, internalProcessingTime, remote_a, remote_b, etc. The closest I can get is this (the Msg field contains the JSON):

index=wdpr_S0001469 source="*-vas-latest*" "Orchestration Summary"
| spath input=Msg
| table _time, totalTime, totalSorsTime, internalProcessingTime, sorMetrics{}.sor, sorMetrics{}.executionTimeMs

Any nudge in the right direction would be greatly appreciated! Sample JSON:

{
  "totalTime": 2820,
  "totalSorsTime": 1505,
  "internalProcessingTime": 1315,
  "sorMetrics": [
    { "sor": "remote_a", "executionTimeMs": 77 },
    { "sor": "remote_b", "executionTimeMs": 27 },
    { "sor": "remote_c", "executionTimeMs": 759 },
    { "sor": "remote_d", "executionTimeMs": 199 },
    { "sor": "remote_e", "executionTimeMs": 85 },
    { "sor": "remote_f", "executionTimeMs": 252 }
  ]
}
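A hedged sketch of the usual mvzip/mvexpand pattern for this shape, with the two header metrics appended as extra pseudo-series so every line comes from one (series, value) pair:

```
index=wdpr_S0001469 source="*-vas-latest*" "Orchestration Summary"
| spath input=Msg
| rename sorMetrics{}.sor as sor, sorMetrics{}.executionTimeMs as ms
| eval sor = mvappend(sor, "totalSorsTime", "internalProcessingTime")
| eval ms  = mvappend(ms, totalSorsTime, internalProcessingTime)
| eval pair = mvzip(sor, ms)
| mvexpand pair
| eval series = mvindex(split(pair, ","), 0),
       value  = tonumber(mvindex(split(pair, ","), 1))
| timechart limit=0 avg(value) by series
```

mvzip pairs the two multivalue fields positionally, so appending to both keeps them aligned; after mvexpand each row carries one series name and one number, which timechart turns into one line each for remote_a..remote_f plus the two header metrics.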
My long set of SPL starts with the typical filtering on the primary search line. It then uses various eval, foreach, streamstats and eventstats commands to prepare the data for a big stats aggregation. Here is the problem, or at least a gap in my understanding: early in the SPL I use a "| where" command to eliminate events not containing a specific value. This works great; the results filter down to 351,513 events. However, between this where command and the line just before the big stats command I only use eval, foreach, streamstats and eventstats commands, and the search results increase by 29, to 351,532 events. I thought each of these commands merely modified or created fields within the events. So the question is: can an eval, foreach, streamstats or eventstats command ever INCREASE the number of search results, or am I just misinterpreting the results?
I have an index1/source1/sourcetype1 with several million events each day, and a second index1/source1/sourcetype2 with several hundred events each day. Several times a day I must execute a JOIN to associate one sourcetype1 field with one sourcetype2 field, with each run of the query covering the last two weeks. The associations between query1 and query2 change or are updated with each run. The output is not static (it changes with each run), which means the output of the last query is no longer valid once the data in query2 changes. Is there a better way to address this? A KB or lookup won't work since the output of query2 changes the outcome, and saving the output of query1 is not practical (millions of events).

index=index1 sourcetype=sourcetype1 field=common
| join common
    [ search index=index1 sourcetype=sourcetype2 field=common field=changing ]
| table common, changing, field3, field4, field5, ......
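The usual join-free pattern here is to search both sourcetypes in one pass and let stats stitch rows together on the shared field, which avoids join's subsearch limits entirely. A hedged sketch (aggregation functions are placeholders; pick latest() vs values() depending on whether only the newest association should win):

```
index=index1 (sourcetype=sourcetype1 OR sourcetype=sourcetype2)
| stats latest(changing) as changing,
        values(field3) as field3,
        values(field4) as field4,
        values(field5) as field5
        by common
```

Because the association is recomputed from the raw events on every run, the "output changes with each run" concern takes care of itself.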
Hi, we are sending reduced-size logs to our Splunk to do some smarts. We realized that for the past year or so one of our alerts has not been working at all. During that year we upgraded Splunk from 6.5.2 to the latest 8.2.1 and also migrated it off the VM it sat on.

index=clean_security_events earliest=-1h
| stats count as Events by SG mx_ip
| join SG
    [ search index=clean_security_events earliest=-720h latest=-1h
      | bin span=1h _time
      | stats count by SG _time
      | streamstats mean(count) as Average, stdev(count) as Deviation, max(count) as Peak by SG
      | dedup SG sortby -_time
      | eval Average = round(Average)
      | eval Variance = Deviation / Average ]
| where Events > (Average + (Deviation * (Variance + 10))) AND Events > (Average * 20) AND Events > 20000 AND Events > Peak AND Average > 50
| lookup mx2cx mx_ip
| table ServerGroup mx_ip cx Events Average

The general idea is that we send reduced security events from our app and use the above to determine whether a given SG (hence the stats count as Events) is suddenly generating high event volumes compared to the last 30 days. By trial and error, if I narrow down to one mx_ip out of the hundreds, it works. I suspect the subsearch is either generating too many events or its results are taking too long for the parent search, so we get empty tables. Any idea how to fix this? My understanding is that I can increase the limits, but that is not recommended. I was also thinking of using the ML Toolkit to detect outliers; that way I could replace two alerts (one for a sudden uptick and one for a sudden downtick).
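One hedged way to sidestep the subsearch limits is to compute the baseline and the current hour in a single pass, using eventstats over only the baseline rows. This sketch groups by SG alone (the original also split the current hour by mx_ip, which would need re-adding), so treat it as a starting point rather than a drop-in replacement:

```
index=clean_security_events earliest=-720h
| bin span=1h _time
| stats count by SG, _time
| eval is_current = if(_time >= relative_time(now(), "-1h@h"), 1, 0)
| eventstats avg(eval(if(is_current=0, count, null())))   as Average,
             stdev(eval(if(is_current=0, count, null()))) as Deviation,
             max(eval(if(is_current=0, count, null())))   as Peak by SG
| where is_current=1
| eval Average = round(Average), Variance = Deviation / Average
| where count > (Average + (Deviation * (Variance + 10)))
    AND count > (Average * 20) AND count > 20000
    AND count > Peak AND Average > 50
```

The eval-inside-eventstats trick keeps the current hour out of its own baseline, mirroring the original latest=-1h subsearch.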
Hello all, I have the Fire Brigade TA v2.0.4 installed on all the indexers in my 20-node cluster, and the app installed on my DMC host. I did the default configuration, which is to let the saved search populate the "monitored_indexes.csv" file on all the indexers. When I bring up the app and start to research the indexes, I only see about 20 indexes in Fire Brigade, while the Splunk Monitoring Console says there are a total of 91 (internal and non-internal). So the configuration is quite simple: the TA is installed on all indexers in the 20-node cluster, the app is installed on the DMC, and the TA is not installed on the DMC search head or on the cluster master. From what I can tell it should just work. It has been installed for months and I still cannot get it to recognize all the indexes in our environment. Ideas? Thanks, Ed
Hi team, I am trying to run the query below. The problem is that it's not showing any "Blocked" data, only "Non access Not Blocked". Is there a syntax error in the * or %? Please suggest.

| eval BlockedStatus = case(
    Like(src,"11.11.111.%") AND act="REQ_BLOCKED*", "Blocked",
    Like(src,"222.22.222.%") AND act="REQ_BLOCKED*", "Blocked",
    Like(src,"11.11.111.%") AND act!="REQ_BLOCKED*", "Not Blocked",
    Like(src,"222.22.222..%") AND act!="REQ_BLOCKED*", "Not Blocked",
    NOT Like(src,"11.11.111.%") AND act="REQ_BLOCKED*", "Non access Blocked",
    NOT Like(src,"222.22.222..%") AND act="REQ_BLOCKED*", "Non access Blocked",
    NOT Like(src,"11.11.111.%") AND act!="REQ_BLOCKED*", "Non access Not Blocked",
    NOT Like(src,"222.22.222..%") AND act!="REQ_BLOCKED*", "Non access Not Blocked")
| stats count by Customer, BlockedStatus
| rename Customer as "Local Market", count as "Total Critical Events"
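The likely culprit: inside eval, = is an exact string comparison, so act="REQ_BLOCKED*" only matches events whose act literally ends in an asterisk; wildcards don't glob there. Use like() with % for act as well (and note the stray double dot in "222.22.222..%"). A hedged rewrite relying on case() evaluating branches in order:

```
| eval BlockedStatus = case(
    (like(src,"11.11.111.%") OR like(src,"222.22.222.%")) AND like(act,"REQ_BLOCKED%"), "Blocked",
    like(src,"11.11.111.%") OR like(src,"222.22.222.%"), "Not Blocked",
    like(act,"REQ_BLOCKED%"), "Non access Blocked",
    true(), "Non access Not Blocked")
| stats count by Customer, BlockedStatus
| rename Customer as "Local Market", count as "Total Critical Events"
```

Because earlier branches consume the local-market and blocked cases, the later branches don't need the explicit NOT conditions.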
I get "Intelligence download of 'mitre_attack' has failed" on this date, and multiple retries have failed. I checked the URL for it but am not sure what the correct URL is supposed to be. I appreciate your help in advance.
I need to set up an alert to track whenever someone deletes any file from a shared folder on a Windows 2016 file server. I need to know which log to ingest into Splunk for setting up this alert. If you have the Splunk query for this, that would be helpful.
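The relevant source is the Windows Security event log, and it only records deletions if file-system object access auditing (Audit File System, plus a SACL on the share) is enabled on the server. Deletions then typically surface as EventCode 4660 (object deleted) paired with 4663 where the access mask includes DELETE. A hedged starting search; index, sourcetype, and field names vary by environment and TA:

```
index=wineventlog sourcetype=WinEventLog:Security
    (EventCode=4660 OR (EventCode=4663 Accesses="*DELETE*"))
| table _time, user, Object_Name, EventCode, ComputerName
```

Save it as an alert triggered when the result count is greater than zero over your chosen window.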
I have installed splunk/splunk:latest and exposed it on 8000 per the instructions. I can get to the GUI on localhost:8000 and retrieved a HEC token. When I try to validate the install using

curl -k https://localhost:8088/services/collector/event -H "Authorization: Splunk my-hec-token" -d '{"event": "hello world"}'

I get this error: Failed to connect to localhost port 8088: Connection refused. Note: I am using the correct token.
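Publishing 8000 does not publish the HEC port, so the container never exposes 8088 to the host, which matches the connection-refused error. A hedged sketch of a run command mapping both ports (container name and password are placeholders):

```
docker run -d --name splunk \
  -p 8000:8000 -p 8088:8088 \
  -e SPLUNK_START_ARGS="--accept-license" \
  -e SPLUNK_PASSWORD="changeme123" \
  splunk/splunk:latest
```

Also confirm HEC itself is enabled inside Splunk (Settings > Data Inputs > HTTP Event Collector > Global Settings), since the token existing does not guarantee the collector endpoint is turned on.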
I am using Python to access a saved search, which I then want to run. I understand how to do this using the .dispatch method. The issue I am having is that my search contains search variables, for example:

| eval state="$state$"

Using SPL I simply call:

| savedsearch "somesearch" state="state"

I have seen that with JS you can pass {state: somestate} in the .dispatch() method. In Python, however, any time I attempt to pass a parameter with these values I get various errors. Any help in the direction of passing a variable name would be great! Thanks.
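A hedged sketch of the usual splunklib pattern: dispatch() forwards keyword arguments to the REST endpoint saved/searches/{name}/dispatch, which accepts "args.&lt;token&gt;" parameters for $token$ substitution. Because "args.state" is not a valid Python identifier, it can't be a bare keyword argument, which is a common source of those errors; build a dict and unpack it with ** instead (the search name and connection details below are placeholders):

```python
# "args.state" contains a dot, so it must go through ** dict unpacking,
# not dispatch(args.state=...), which is a Python syntax error.
dispatch_args = {"args.state": "california"}

# With a live connection (sketch; names are assumptions):
# import splunklib.client as client
# service = client.connect(host="localhost", port=8089,
#                          username="admin", password="changeme")
# job = service.saved_searches["somesearch"].dispatch(**dispatch_args)

print(dispatch_args["args.state"])
```

The same dict can carry dispatch-time settings like "dispatch.earliest_time" alongside the token values.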