All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I'm doing some custom regex extractions for various fields, and often they'll be nested under a bigger field, for example requesterDN=\"ou=*,uid=*... Is there a way to have a period character (.) in the name of a regex capture group? And if so, how?
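A possible workaround, since the PCRE named groups that rex uses only allow letters, digits, and underscores: capture under a plain name, then rename to the dotted name afterwards. A minimal sketch (the patterns and field names are illustrative, not taken from the poster's data):

  | rex "requesterDN=\"ou=(?<requesterDN_ou>[^,]+),uid=(?<requesterDN_uid>[^,\"]+)"
  | rename requesterDN_ou as "requesterDN.ou", requesterDN_uid as "requesterDN.uid"

Field names with periods are legal once quoted in rename; they just can't appear inside the capture group itself.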
I have a field named Msg which contains JSON. That JSON contains some values and an array. I need to get each item from the array onto its own line (line chart line) and also get one of the header values as a line. So on my line chart I want a line for each of: totalSorsTime, internalProcessingTime, remote_a, remote_b, etc. The closest I can get is this:

  index=wdpr_S0001469 source="*-vas-latest*" "Orchestration Summary"
  | spath input=Msg    <<<< Msg field contains the JSON
  | table _time, totalTime, totalSorsTime, internalProcessingTime, sorMetrics{}.sor, sorMetrics{}.executionTimeMs

Any nudge in the right direction would be greatly appreciated! Sample event:

  {
    "totalTime": 2820,
    "totalSorsTime": 1505,
    "internalProcessingTime": 1315,
    "sorMetrics": [
      { "sor": "remote_a", "executionTimeMs": 77 },
      { "sor": "remote_b", "executionTimeMs": 27 },
      { "sor": "remote_c", "executionTimeMs": 759 },
      { "sor": "remote_d", "executionTimeMs": 199 },
      { "sor": "remote_e", "executionTimeMs": 85 },
      { "sor": "remote_f", "executionTimeMs": 252 }
    ]
  }
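One possible direction (a sketch, untested against this data): zip the two multivalue fields together, expand to one row per SOR, turn each SOR name into its own field, and chart:

  index=wdpr_S0001469 source="*-vas-latest*" "Orchestration Summary"
  | spath input=Msg
  | eval pair=mvzip('sorMetrics{}.sor', 'sorMetrics{}.executionTimeMs', ":")
  | mvexpand pair
  | rex field=pair "(?<sor>[^:]+):(?<executionTimeMs>.+)"
  | eval {sor}=executionTimeMs
  | timechart latest(totalSorsTime) latest(internalProcessingTime) latest(remote_*)

The eval {sor}=... step creates one field per SOR name (remote_a, remote_b, ...), which timechart can then pick up with a wildcard.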
My long set of SPL starts with the typical filtering on the primary search line. It then uses various eval, foreach, streamstats and eventstats commands to process the data for a big stats aggregation command. Here is the problem, or at least a gap in my understanding: early in the SPL I use a "| where" command to eliminate events not containing a specific value. This works great; the results filter down to 351,513 events. However, between this where command and the line just before the big stats command, I only use eval, foreach, streamstats and eventstats commands ... and the search results increase by 29, to 351,532 events instead of 351,513. I thought each of these commands merely modified or created fields within the events. So the question is: can an eval, foreach, streamstats or eventstats ever INCREASE the number of search results, or am I just misinterpreting the results?
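For the record, none of these four commands adds events by itself: eval and foreach modify fields in place, and streamstats/eventstats only annotate events with aggregates. One way to confirm where the count changes (a sketch with placeholder names) is to bisect the pipeline, re-running it with a trailing count after each successive command:

  index=...
  | where myfield="specific_value"
  | stats count

then move the closing | stats count one command further down on each run until the number jumps. Also note that event counts read off a search that has not yet finalized can differ between runs.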
I have an index1/source1/sourcetype1 of events that is several million records each day. I have a second index1/source1/sourcetype2 that is several hundred records each day. Several times a day I must execute a join command to associate one sourcetype1 field with one sourcetype2 field, with each run of the query covering the last 2 weeks. The associations between query1 and query2 change or are updated with each run. The output is not static (it changes with each run), which means the output of the last query is no longer valid once the data in query2 changes. Is there a better way to address this? A KB or lookup won't work since the output of query2 changes the outcome, and saving the output of query1 is not practical (millions of events).

  index=index1 sourcetype=sourcetype1 field=common
  | join common
      [ search index=index1 sourcetype=sourcetype2 field=common field=changing ]
  | table common, changing, field3, field4, field5, ......
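One join-free pattern worth trying (a sketch, assuming `common` is the shared key field): search both sourcetypes at once and correlate with stats, which sidesteps join's subsearch row and time limits on the multi-million-event side:

  index=index1 source=source1 (sourcetype=sourcetype1 OR sourcetype=sourcetype2)
  | stats values(changing) as changing, values(field3) as field3, values(field4) as field4, values(field5) as field5 by common

Since sourcetype2 changes every run, the stats output naturally reflects the current associations each time the query executes.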
Hi, we are sending reduced-size logs to our Splunk to do some smarts. We realized that for the past year or so one of our alerts has not been working at all. During that year we upgraded Splunk from 6.5.2 to the latest 8.2.1 and also migrated the entire VM it sits on.

  index=clean_security_events earliest=-1h
  | stats count as Events by SG mx_ip
  | join SG
      [ search index=clean_security_events earliest=-720h latest=-1h
        | bin span=1h _time
        | stats count by SG _time
        | streamstats mean(count) as Average, stdev(count) as Deviation, max(count) as Peak by SG
        | dedup SG sortby -_time
        | eval Average = round(Average)
        | eval Variance = Deviation / Average ]
  | where Events > (Average + (Deviation * (Variance + 10))) AND Events > (Average * 20) AND Events > 20000 AND Events > Peak AND Average > 50
  | lookup mx2cx mx_ip
  | table ServerGroup mx_ip cx Events Average

The general idea is that we send reduced security events from our app and use the above to determine whether a given SG (hence the stats count as Events) is suddenly generating high event volumes compared to the last 30 days. Through trial and error, if I narrow down to one mx_ip out of the hundreds, it works. I suspect that the subsearch is either generating too many results or taking too long for the parent search, and as a result we are getting empty tables. Any idea how to fix this? My understanding is I can increase the limits, but that is not recommended. I was thinking of using the ML Toolkit to detect outliers, and that way I can replace two alerts (one for a sudden uptick and one for a sudden downtick).
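One way to remove the subsearch entirely (a sketch, untested): compute the 30-day baseline and the current hour in a single pass with eventstats, so nothing gets silently truncated by subsearch limits. Note that unlike the original, Peak here includes the current hour, and mx_ip was dropped from the grouping for brevity (add it back to the stats by-clause and the lookup as needed):

  index=clean_security_events earliest=-720h@h
  | bin span=1h _time
  | stats count as Events by SG, _time
  | eventstats avg(Events) as Average, stdev(Events) as Deviation, max(Events) as Peak by SG
  | where _time >= relative_time(now(), "-1h@h")
  | eval Average = round(Average), Variance = Deviation / Average
  | where Events > (Average + (Deviation * (Variance + 10))) AND Events > (Average * 20) AND Events > 20000 AND Events > Peak AND Average > 50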
Hello all, I have the Fire Brigade TA v2.0.4 installed on all my indexers in my 20-node cluster, and the app installed on my DMC host. I did the default configuration, which is to allow the saved search to populate the "monitored_indexes.csv" file on all the indexers. When I bring up the app and start to research the indexes, I only see about 20 indexes in the Fire Brigade app. The Splunk monitoring console says there are a total of 91 (internal and non-internal). So the configuration is quite simple:

- TA installed on all indexers in a 20-node cluster
- App installed on the DMC
- TA is not installed on the DMC search head and is not installed on the cluster master.

From what I can tell it should just work. It has been installed for months and I still cannot get it to recognize all the indexes we have in our environment. Ideas? Thanks, Ed
Hi team, I am trying to run the query below. The problem is that it's not showing any "Blocked" data; it's showing only "Non access Not Blocked". Is there a syntax error in the * or %? Please suggest.

  | eval BlockedStatus = case(
      Like(src,"11.11.111.%") AND act="REQ_BLOCKED*", "Blocked",
      Like(src,"222.22.222.%") AND act="REQ_BLOCKED*", "Blocked",
      Like(src,"11.11.111.%") AND act!="REQ_BLOCKED*", "Not Blocked",
      Like(src,"222.22.222..%") AND act!="REQ_BLOCKED*", "Not Blocked",
      NOT Like(src,"11.11.111.%") AND act="REQ_BLOCKED*", "Non access Blocked",
      NOT Like(src,"222.22.222..%") AND act="REQ_BLOCKED*", "Non access Blocked",
      NOT Like(src,"11.11.111.%") AND act!="REQ_BLOCKED*", "Non access Not Blocked",
      NOT Like(src,"222.22.222..%") AND act!="REQ_BLOCKED*", "Non access Not Blocked")
  | stats count by Customer, BlockedStatus
  | rename Customer as "Local Market", count as "Total Critical Events"
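For what it's worth: like() uses SQL wildcards (% and _), while = and != compare literally, so act="REQ_BLOCKED*" only matches a value that ends in a literal asterisk. A hedged rewrite using like() on both fields (also note the stray double dot in "222.22.222..%" above); case() evaluates conditions in order, so the fall-through covers the remaining combinations:

  | eval BlockedStatus = case(
      (like(src,"11.11.111.%") OR like(src,"222.22.222.%")) AND like(act,"REQ_BLOCKED%"), "Blocked",
      (like(src,"11.11.111.%") OR like(src,"222.22.222.%")), "Not Blocked",
      like(act,"REQ_BLOCKED%"), "Non access Blocked",
      true(), "Non access Not Blocked")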
I get "intelligence down load of "mitre_attack" has failed. On this date. Multiple reties has failed. I checked the URL for it but am not sure what the correct URL is supposed to be. I appreciate you... See more...
I get "intelligence down load of "mitre_attack" has failed. On this date. Multiple reties has failed. I checked the URL for it but am not sure what the correct URL is supposed to be. I appreciate your help in advance
I need to set up an alert to track whenever someone deletes any file from a shared folder on a Windows 2016 file server. I need to know which logs need to be ingested into Splunk to set up this alert. If you have a Splunk query for this, that would be helpful.
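A sketch of the usual approach, with assumptions flagged: enable the "Audit Object Access" / "Audit File System" policy plus a SACL on the shared folder, so the Windows Security log records events 4663 (delete access requested) and 4660 (object deleted); ingest the Security channel with the Splunk Add-on for Windows; then alert on something like the following (the index and exact field names depend on your setup):

  index=wineventlog source="WinEventLog:Security" (EventCode=4660 OR (EventCode=4663 Accesses="DELETE"))
  | table _time, user, Object_Name, EventCode, host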
I have installed splunk/splunk:latest and exposed it on 8000 per the instructions. I can get to the GUI on localhost:8000 and retrieved a HEC token. When I try to validate the install using

  curl -k https://localhost:8088/services/collector/event -H "Authorization: Splunk my-hec-token" -d '{"event": "hello world"}'

I get this error:

  Failed to connect to localhost port 8088: Connection refused

Note: I am using the correct token.
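Mapping port 8000 alone only publishes the web UI; HEC listens on 8088 inside the container, so that port must be published as well. A sketch of the run command under that assumption (the container name and password are placeholders):

  docker run -d --name splunk \
    -p 8000:8000 -p 8088:8088 \
    -e SPLUNK_START_ARGS=--accept-license \
    -e SPLUNK_PASSWORD=<your-password> \
    splunk/splunk:latest

Also check that the token is enabled and whether HEC is configured for HTTPS or HTTP, since the curl URL has to match.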
I am using Python to access a saved search, which I then want to run. I understand how to do this using the .dispatch() method. The issue I am having is that my search uses search variables, for example:

  | eval state="$state$"

Using SPL I simply call:

  | savedsearch "somesearch" state="state"

With the JS SDK I have seen that you can pass {state: somestate} in the .dispatch() method. In Python, however, any time I attempt to pass a parameter with these values I get various errors. Any help in the direction of passing a variable name would be great! Thanks
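With the Python SDK, the dispatch REST endpoint accepts saved-search template arguments as args.* parameters; because the dotted key is not a valid Python keyword argument, it has to be passed via dict unpacking. A minimal sketch, with the connection details and names as placeholders:

  import splunklib.client as client

  service = client.connect(host="localhost", port=8089,
                           username="admin", password="changeme")
  saved = service.saved_searches["somesearch"]

  # "args.state" substitutes $state$ in the saved search at dispatch time
  job = saved.dispatch(**{"args.state": "somestate"})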
In my search result, I have a "Description" field, which contains both text and 2 IP addresses. I want to check both IPs against my lookup table: if the IPs are not present in the lookup, I need the result; if the IPs are present in my lookup table, I want to filter the result out. Kindly help here.
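One possible shape (a sketch; the lookup name and its field are assumptions): extract all IPs from Description into a multivalue field, expand to one row per IP, and keep only rows whose IP has no match in the lookup:

  | rex field=Description max_match=0 "(?<ip>\d{1,3}(?:\.\d{1,3}){3})"
  | mvexpand ip
  | lookup my_ip_lookup ip OUTPUT ip as matched_ip
  | where isnull(matched_ip)

If instead the whole event should be dropped only when both IPs are in the lookup, aggregate back with stats by some per-event identifier before filtering.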
Does someone know if it is still possible to pull the Exchange message tracking logs using the Microsoft Office 365 Reporting Add-on for Splunk? I have followed the setup instructions and it worked for about 8 days, then stopped working for a month, then suddenly started working again, but just for a day. Examining the logs, I've noticed messages saying "HTTP 401 Unauthorized ... call not properly authenticated", but I've never changed credentials. In another question (https://community.splunk.com/t5/All-Apps-and-Add-ons/O365-message-tracking-logs/m-p/487992) I read that "O365 no longer supports basic authentication for O365 to get those log files", but it worked for a while, so I do not understand. Has someone come up with a solution for this? Regards, -G.
Hello - I am using the following two searches. The first search creates a table consisting of _time, idx, and b. There are two other fields available, s for source and h for host; however, we squash this information for performance reasons.

  index=_internal sourcetype=splunkd type=Usage source=*license_usage.log
  | table _time idx b
  | rename idx as index, b as bytes

I have been trying to figure out a way to substitute the s & h data in the events by using a join, append, or appendcols with:

  | tstats count WHERE index=* sourcetype=* source=* unit_id=* by index, sourcetype, source, host, dept
  | table index, sourcetype, source, host, dept

Join example:

  | tstats count WHERE sourcetype=* source=* host=* unit_id=* by index sourcetype source host dept
  | table index sourcetype source host dept
  | join type=inner index
      [ search index=_internal sourcetype=splunkd type=Usage source="/opt/splunk/var/log/splunk/license_usage.log"
        | table _time idx b
        | rename idx as index, b as bytes ]

Append example:

  | tstats count WHERE sourcetype=* source=* host=* unit_id=* by index sourcetype source host dept
  | table index sourcetype source host dept
  | append
      [ search index=_internal sourcetype=splunkd type=Usage source="/opt/splunk/var/log/splunk/license_usage.log"
        | table _time idx b
        | rename idx as index, b as bytes ]

Appendcols example:

  | tstats count WHERE sourcetype=* source=* host=* unit_id=* by index sourcetype source host dept
  | table index sourcetype source host dept
  | appendcols
      [ search index=_internal sourcetype=splunkd type=Usage source="/opt/splunk/var/log/splunk/license_usage.log"
        | table _time idx b
        | rename idx as index, b as bytes ]

Results:
- join: just fails with no data
- append: the _time and bytes fields are blank
- appendcols: leaves out the _time field, which I need to create timecharts

The end result should look like this:

  _time  index  sourcetype  source  host  dept  bytes

where _time, index, and bytes come from the _internal logs, and index, sourcetype, source, host, dept come from the tstats search. Any help is greatly appreciated. Thank you.
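A caveat and a sketch: once the license usage data is squashed, the per-host and per-source attribution is simply no longer in the _internal events, so a join on index alone is a many-to-many labeling, not a true substitution. To at least keep _time, one option is to put the license data on the outer search and enrich from tstats (join defaults to max=1; max=0 keeps all matching rows):

  index=_internal sourcetype=splunkd type=Usage source=*license_usage.log
  | table _time idx b
  | rename idx as index, b as bytes
  | join type=left max=0 index
      [| tstats count WHERE index=* sourcetype=* source=* unit_id=* by index, sourcetype, source, host, dept
       | fields index, sourcetype, source, host, dept ]
  | table _time, index, sourcetype, source, host, dept, bytes

Each usage row gets duplicated across every sourcetype/source/host/dept combination seen in that index, so bytes end up labeled rather than apportioned.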
In order to administer ES better, I am trying to find the queries and searches an app makes, in addition to which data models it uses. Thank you for your help in advance.
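A possible starting point (a sketch; the app name is a placeholder): list the saved searches an app owns via REST and flag those that reference a data model, then cross-check what actually ran in the scheduler logs (index=_internal sourcetype=scheduler):

  | rest /servicesNS/-/-/saved/searches
  | search eai:acl.app="SplunkEnterpriseSecuritySuite"
  | eval uses_datamodel=if(match(search, "datamodel"), "yes", "no")
  | table title, eai:acl.app, uses_datamodel, search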
Setup: Splunk Enterprise is on a VM and everything works fine; one workstation has a universal forwarder.

Problem: I need them to talk to each other for the Stream part.

What I have done so far:
- (Splunk VM) I have added the Stream app for Splunk Enterprise and restarted.
- (Workstation) I have added the Stream app manually to C:\SplunkUniversalForwarder\etc\apps\Splunk_TA_stream.
- (Workstation) I have added to inputs.conf: splunk_stream_app_location = https://192.168.1.115:8000/en-us/custom/splunk_app_stream/
- (Workstation) I have not added anything to streamfwd.

When I come to the Splunk VM, I am lost: what am I doing wrong? The install of Splunk Stream into Splunk Enterprise (VM) was done with the normal config; in other words, I haven't changed where apps are installed, so everything is standard there. I have tried to read https://docs.splunk.com/Documentation/StreamApp/7.3.0/DeployStreamApp/ConfigureStreamForwarder but I'm not getting what I'm doing wrong here. Any suggestions please? Thanks
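For reference, the forwarder-side setting normally lives inside the [streamfwd://streamfwd] stanza of Splunk_TA_stream/local/inputs.conf rather than as a bare line (a sketch to verify against your Stream version; the IP is taken from the post above):

  [streamfwd://streamfwd]
  splunk_stream_app_location = https://192.168.1.115:8000/en-us/custom/splunk_app_stream/
  disabled = 0

Also confirm the workstation can actually reach that 8000 URL (e.g., open it in a browser on the workstation) and check streamfwd.log on the forwarder for connection errors.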
Hi all, I have an event timestamp with milliseconds, but _time has Unix epoch seconds, and during search the timestamp comes from _time; I would like to have it with milliseconds. I am using KV_MODE in the search cluster props.conf:

  [k8s:dev]
  KV_MODE = json

and I am trying to make changes in the HF props.conf, like TIMESTAMP_FIELDS, TIME_PREFIX, TIME_FORMAT, but none of them work. INDEXED_EXTRACTIONS is turned off in the HF props.conf:

  [k8s:dev]
  #temporary removed to fix https://jira/browse/DEVA-61153
  #INDEXED_EXTRACTIONS = JSON
  #TIME_PREFIX = {\\"@timestamp\\":\\"
  TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
  TIMESTAMP_FIELDS = @timestamp
  TRUNCATE = 200000
  TRANSFORMS-discard_events = setnull_whitespace_indented,setnull_debug_logging
  SEDCMD-RemoveLogProp = s/("log":)(.*)(?="stream":)//

This is the log, which is coming into Splunk by HEC:

  {"log":"{\"@timestamp\":\"2021-08-03T09:00:57.539+02:00\",\"@version\":\"1\",\"message\":

My question is: do changes like TIMESTAMP_FIELDS, TIME_PREFIX, TIME_FORMAT in the HF have any effect on this process when INDEXED_EXTRACTIONS is not in use? Thank you very much for your answers.
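To the direct question: TIMESTAMP_FIELDS only takes effect together with INDEXED_EXTRACTIONS, so with that commented out, only TIME_PREFIX / TIME_FORMAT (which operate on the raw text) remain in play. A sketch matched against the escaped-JSON raw string above (the regex and format are assumptions to verify against your events):

  [k8s:dev]
  TIME_PREFIX = \\"@timestamp\\":\\"
  TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
  MAX_TIMESTAMP_LOOKAHEAD = 40

One more caveat: if the data arrives via the HEC event endpoint (/services/collector/event), Splunk takes the timestamp from the envelope's time field and skips raw timestamp extraction; TIME_PREFIX/TIME_FORMAT generally apply when using the raw endpoint (/services/collector/raw).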
Hi, can someone help me with an SPL query so that I can list the indexes of a data model? Data model name: authentication.malware. I appreciate your help in advance.
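Assuming you mean CIM-style data models such as Authentication or Malware, a minimal sketch that lists which indexes feed a model:

  | tstats count from datamodel=Authentication by index

Run the same with datamodel=Malware (or whatever the exact model name is in your environment) for the other model.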
I would like to find:
1. all unique combinations of actionKey, modelName, programName
2. only data with a confidence score > 70.00

Splunk raw log:

  2021-08-04 07:35:39,069 INFO [boundedElastic-87] [traceId="a4d01423048aa5de"] Request{userId='6699249',channelWise='SOCIAL', cid='1627958668279-9a93682610ee1c700c7e5d4ad01e8c76207274', sid=b8d2a070-f404-11eb-9cf4-5d474ec9ecbc, mlrecopred=[{actionKey=search, confidenceScore=83.46, modelName=model_forrest, programName=sapbased}, {actionKey=shipping_and_delivery, confidenceScore=82.94, modelName=model_forrest, programName=sapbased}, {actionKey=inventory_check, confidenceScore=65.21, modelName=model_forrest, programName=sapbased}, {actionKey=search, confidenceScore=63.46, modelName=event_handler, programName=sapbased}, {actionKey=shipping_and_delivery, confidenceScore=55.45, modelName=event_handler, programName=sapbased}], interactionId=0d6b031fdddba957, uniqueId='ed064f15d49c70ea7f540f7fe2ed2b7083e6eef8760f645f05d6600ad1208c3d'}
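A possible extraction (a sketch; the rex pattern assumes the mlrecopred layout shown above holds for every event):

  | rex max_match=0 "\{actionKey=(?<ak>[^,]+), confidenceScore=(?<cs>[\d.]+), modelName=(?<mn>[^,]+), programName=(?<pn>[^}]+)\}"
  | eval combo=mvzip(mvzip(mvzip(ak, cs, "|"), mn, "|"), pn, "|")
  | mvexpand combo
  | eval actionKey=mvindex(split(combo,"|"),0),
         confidenceScore=tonumber(mvindex(split(combo,"|"),1)),
         modelName=mvindex(split(combo,"|"),2),
         programName=mvindex(split(combo,"|"),3)
  | where confidenceScore > 70.00
  | stats count by actionKey, modelName, programName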
Hi all, I have configured alerts for the search below:

  index="ebs_red_0" host="dev-obiee-ux0*" source="/obiee_12c/app/oracle/product/12212/user_projects/domains/bi/nodemanager/nodemanager.log" "waiting for the process to die"

Output:

  8/3/21 9:38:11.000 AM dev-obiee-ux08 The server 'obips2' with process id 12714242 is no longer alive; waiting for the process to die. obips2 obiee:nodemanager:log Aug 3, 2021 5:38:11 AM EDT

But sometimes when my server process dies it restarts automatically within 60 seconds, which shows up as:

  index="ebs_red_0" host="dev-obiee-ux0*" source="/obiee_12c/app/oracle/product/12212/user_projects/domains/bi/nodemanager/nodemanager.log" "is running now"

Output:

  8/3/21 9:39:27.000 AM dev-obiee-ux08 The server 'obis2' is running now. obis2 obiee:nodemanager:log Aug 3, 2021 5:39:27 AM EDT

So I want to write the search query in a way that generates an alert only if the server process dies and doesn't come up again within 120 seconds. The fields used in the search are: _time, host, Message, OBIEE_Comp, sourcetype, time. To correlate the two events for the alert, the OBIEE_Comp needs to be the same.
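One possible shape (a sketch, untested): pull both message types in one search, keep only the most recent status per component, and alert when the latest status is still "down" and older than 120 seconds:

  index="ebs_red_0" host="dev-obiee-ux0*" source="/obiee_12c/app/oracle/product/12212/user_projects/domains/bi/nodemanager/nodemanager.log" ("waiting for the process to die" OR "is running now")
  | eval status=if(searchmatch("is running now"), "up", "down")
  | stats latest(_time) as last_time, latest(status) as last_status by OBIEE_Comp
  | where last_status="down" AND (now() - last_time) > 120

Scheduled every minute or two with a "number of results > 0" trigger, this only fires for components that died and have not logged "is running now" since.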