All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I have two sample logs and need to combine them into one query to grab the "Accesses" values. Because of the log format differences, I have combined two different queries using append/sub-search. However, I noticed that the subsearch results show up in the "CE" field but do not show up in the "TC" field. Is there another way to capture the Accesses value from these two logs?

index=nginx (("SourceName=Microsoft Windows security auditing" AND EventCode=4663 AND "Message=An attempt was made to access an object" AND Object_Name="C:\\Program Files\\nginx-1.18.0\\conf\\*.conf" AND ("Accesses:*")) OR (SourceName=Microsoft-Windows-TerminalServices-LocalSessionManager AND EventCode=21))
| eval Date = strftime(_time, "%Y-%d-%m")
| eval Hostname = upper(host)
| eval Time = strftime(_time, "%Y-%d-%m %H:%M:%S")
| rex field=Message "Accesses:\s+(?P<Action>[^<]+)\s+Access"
| append [ search index=nginx ((EventCode=4656 Object_Name="C:\\Program Files\\nginx-1.18.0\\conf\\*.conf" Process_Name=C:\\Windows\\explorer.exe) OR (SourceName=Microsoft-Windows-TerminalServices-LocalSessionManager AND EventCode=21))
  | rex field=Message max_match=0 "Accesses:\s+(?P<Action>[^>]+)\s+Access\sReasons" ]
| rex field=Action mode=sed "s/\s{2,}/\n/g"
| strcat Action "-> " Object_Name CE
| eval TC = mvzip(Time, 'CE')
| stats values(TC) as TC latest(Source_Network_Address) as "IP Address" by Date index Hostname
| where isnotnull(TC)
| mvexpand TC
| makemv TC delim=","
| eval Time=mvindex(TC, 0)
| eval "Events"=mvindex(TC, 1)
| fields Time Events Hostname "IP Address"
| where 'Events' != "-> "

Thank you
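One way to sidestep the append entirely, since both extractions hit the same index: run everything as a single search so that Time is computed for every event before mvzip (in the query above, the appended events never get a Time field, which may be why TC stays empty for them). A minimal sketch, reusing the field names from the question; the combined rex is an assumption to adjust against the real Message text:

index=nginx ((EventCode=4663 OR EventCode=4656) Object_Name="C:\\Program Files\\nginx-1.18.0\\conf\\*.conf") OR (SourceName=Microsoft-Windows-TerminalServices-LocalSessionManager EventCode=21)
| rex field=Message "Accesses:\s+(?P<Action>.+?)\s+Access"
| eval Time = strftime(_time, "%Y-%d-%m %H:%M:%S")
| eval CE = Action . " -> " . Object_Name
| eval TC = mvzip(Time, CE)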
Below is an example of what I want to accomplish: if x="example" and y="success", return true for this segment. If x="example2" and y="success", return true for this segment. If x="example3" and y="success", return true for this segment. If all three statements are true, return true. If not all three are true, return false.
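A minimal SPL sketch of that logic, assuming x and y are fields on the incoming events and that a segment counts as true when at least one event matches it:

| eval seg1=if(x=="example" AND y=="success", 1, 0)
| eval seg2=if(x=="example2" AND y=="success", 1, 0)
| eval seg3=if(x=="example3" AND y=="success", 1, 0)
| stats max(seg1) as seg1 max(seg2) as seg2 max(seg3) as seg3
| eval result=if(seg1==1 AND seg2==1 AND seg3==1, "true", "false")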
Hi, please could you help me install using the MSI? I've tried to run it as admin as well as a normal user. The status bar goes up to 80%, then an error occurs, the installer rolls back, and I can see it deleting files from the Splunk directory. I've tried to troubleshoot online but haven't been able to find a solution to the problem. Thanks.
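One standard way to see what actually fails, assuming the installer file is named splunk.msi (substitute the real filename): run the MSI from an elevated command prompt with verbose logging enabled, then read the log around the point where the rollback starts.

msiexec /i splunk.msi /L*v splunk_install.log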
log1: user_id, status=interrupt; log2: user_id, status=success. Hi all, I want to find user_ids that failed due to an interrupt after an initial success state, over the last 30 days. I tried the transaction command, but the query runs slowly, and the same goes for a subsearch. With stats, I am not able to figure out how to make sure that I only match an "interrupt" that comes after the success and not one that occurred before it. In my base query I take only the logs with these two states. Since I run over the last 30 days, I am not able to tell whether the interrupt occurred after the success or occurred separately. The time between these two events is mostly no more than 30s. What I have right now: time_duration no more than 30s, grouped by user_id, and the interrupt should come after the success. I need some advice on how to achieve this with the stats command. With transaction I've tried something like this:

base_query status="success" OR status="interrupt" | transaction user_id startswith=(status="success") endswith=(status="interrupt") maxspan=30s | stats count by user_id

I checked other answers about the transaction command, but did not find them satisfactory for what I'm looking for.
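A hedged streamstats sketch of the same pairing logic, assuming the events can be sorted into ascending time order per user (field names copied from the question). Because streamstats is a streaming command, it usually scales better over 30 days than transaction, which has to hold open transactions in memory:

base_query status="success" OR status="interrupt"
| sort 0 user_id _time
| streamstats current=f window=1 last(status) as prev_status last(_time) as prev_time by user_id
| where status=="interrupt" AND prev_status=="success" AND (_time - prev_time) <= 30
| stats count by user_id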
Hello, hoping to get a hint on where to go with this. Use case: I am attempting to import events from an exported .evtx file from an external Windows host, as per: Splunk Docs for Importing Windows Event Log Files. The inputs.conf has been written close to the following:

[monitor://D:\SplunkLogImport\awesome_hostname\preprocess-winevt\*.evtx]
disabled = 0
sourcetype = preprocess-winevt
host = awesome_hostname
index = awesome_index
crcSalt = <SOURCE>
move_policy = sinkhole
evt_resolve_ad_obj = 0

The challenge here is that the logs are from a server in another domain, on another network entirely, and I have no access to a domain controller. As per: Splunk Docs for Monitor Windows EventLog Data, I am receiving an error (as expected), but I'm not seeing any data come in. I am not concerned with having all the data resolved; I am seeking to simply ingest this data.

Question: Any thoughts on how to blindly import the event logs, knowing full well we're not going to get SID/GID object resolution? What is required to tell the forwarder not to bind to the domain? I've attempted the following with no results: evt_resolve_ad_obj = 0. I would appreciate any guidance that may exist on this subject.

Details: the host is running Splunk Universal Forwarder v7.3 on Windows 2012. The source data is from a Windows 2008 R2 server. The error (thousands of these):

INFO WinEventLogChannel - WinEventLogChannel::getEventsNew (2000): No bindToDc
Hey y'all, a quick question for the Splunk community and perhaps the developers: how does Splunk, or any SIEM solution, recognize a device name from raw data logs? I've been scratching my head trying to figure that out.
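For Splunk specifically, the host field is normally assigned by the forwarder's configuration, but it can also be pulled out of the raw event at index time with a transform. A hedged sketch, assuming a syslog-style event whose token after the timestamp is the hostname (the stanza name, sourcetype, and regex are illustrative, not from the question):

# transforms.conf
[set_host_from_event]
REGEX = ^\w{3}\s+\d+\s+[\d:]+\s+(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

# props.conf
[my_syslog_sourcetype]
TRANSFORMS-sethost = set_host_from_event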
I am noticing that for some of our events, our playbooks run multiple times on the same event. How can I keep that from happening? The second run of the playbook generates a lot of error notifications for us. Example: we get an alert, Phantom ingests the email, our "move processed email" playbook runs, our alert playbooks run, and then our "move processed email" playbook runs again, causing us to receive an error notification (by design). How can this be stopped?
Hi Splunk world, I am new to Splunk. Could you please help me get started on how to monitor the certificates on the servers that are monitored in Splunk? Thanks
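One common starting point is to collect certificate expiry dates with a scripted input and index the output, then alert in Splunk when the notAfter date gets close. A hedged sketch, assuming OpenSSL is available on the host and myhost.example.com:443 stands in for a real endpoint:

echo | openssl s_client -connect myhost.example.com:443 2>/dev/null | openssl x509 -noout -subject -enddate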
Hi all, my Splunk architecture consists of just one Heavy Forwarder and Splunk Cloud. I have some logs that do not go through the HF (they go straight to Splunk Cloud) that I want to drop based on their IP, and to do so I was wanting to modify props and transforms on the Cloud side (like you would do on a forwarder to drop logs). Support is telling me that in order to do this I should make a custom app and modify props and transforms there, without giving me much more than that. Has anyone done something like this, and what did you end up doing? Thanks!
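For reference, the props/transforms pair support is describing would look roughly like this inside the custom app; the sourcetype name and IP are placeholders, not from the question. These are the same nullQueue stanzas used to drop events on a heavy forwarder, applied at index time once the app is installed in Splunk Cloud:

# props.conf
[my_sourcetype]
TRANSFORMS-dropbyip = drop_by_ip

# transforms.conf
[drop_by_ip]
REGEX = 10\.1\.2\.3
DEST_KEY = queue
FORMAT = nullQueue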
Hi there, how do I exclude Source IP and Destination IP from the results if they belong to the same private IP range? For example, in the results shown below:

src_ip   | dest_ip     | count
10.0.0.1 | 10.10.0.1   | 1
10.0.0.1 | 192.168.0.1 | 1

I need to exclude the first row from the statistics because both addresses belong to the same private IP range, but I want to keep the second row.
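A hedged sketch using cidrmatch to classify each address into its RFC 1918 block and drop rows where both sides land in the same block (append it to the existing stats search):

| eval src_range=case(cidrmatch("10.0.0.0/8", src_ip), "10/8", cidrmatch("172.16.0.0/12", src_ip), "172/12", cidrmatch("192.168.0.0/16", src_ip), "192/16", true(), "other")
| eval dest_range=case(cidrmatch("10.0.0.0/8", dest_ip), "10/8", cidrmatch("172.16.0.0/12", dest_ip), "172/12", cidrmatch("192.168.0.0/16", dest_ip), "192/16", true(), "other")
| where src_range=="other" OR dest_range=="other" OR src_range!=dest_range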
Dear Splunkers, hello. I am new to Splunk and have the task of creating an alert for the following scenario: each minute we receive about 100K events and need to find events where a field value is greater than 180. We also have two eval fields (current value and previous value). After each event, current_value = previous_value plus or minus 1, depending on whether the value is greater or less than 180. Also, when the end of a file is reached, the next file should start with the current and previous values carried over. I have created the following search, but it doesn't work well:

index=OurIndex
| eval alertType = ""
| eval threshold = 180
| eval severity = "low"
| eval maxLevel = 5
| eval alertLevel = 1
| eval clearLevel = 0
| eval startTime = round(relative_time(_time, "-0s@s"))
| eval processedTime = now()
| eval metric = "dl_dmax"
| eval metricValue = dl_dmax
| streamstats current=f window=1 last(dl_dmax) as lastDmax, last(stateLevel) as lastStateLevel by _time
| eval stateLevel = if(isnull(lastStateLevel), 0, lastStateLevel)
| eval lastLevel = if(lastDmax>threshold, case(stateLevel<maxLevel, stateLevel+1, stateLevel==maxLevel, maxLevel), case(stateLevel!=0, stateLevel-1, stateLevel=0, 0))
| eval stateLevel = if(metricValue>threshold, case(lastLevel<maxLevel, lastLevel+1, lastLevel==maxLevel, maxLevel), case(lastLevel!=0, lastLevel-1, lastLevel=0, 0))
| table snmpid, objectId, objectName, objectType, alertLevel, lastLevel, stateLevel

For now stateLevel is never greater than 2 and lastLevel is never greater than 1. Can you please advise on how to modify my search to make it work? Thanks in advance!
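One thing worth checking first: streamstats ... by _time restarts its window for every distinct timestamp, so state can never accumulate across events that arrive at different times. A hedged sketch of a running level without the by _time clause, approximating the saturating counter with a clamped running sum (eval's max/min do the clamping; this is an approximation of the case logic above, not an exact replacement):

index=OurIndex
| sort 0 _time
| eval step = if(dl_dmax > 180, 1, -1)
| streamstats sum(step) as runLevel
| eval stateLevel = max(0, min(5, runLevel))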
The Authentication Requirement field from Azure is not showing up in Splunk Cloud. According to https://docs.microsoft.com/en-us/graph/api/signin-list?view=graph-rest-beta&tabs=http, I need to add both AuditLog.Read.All and Directory.Read.All to my API permissions. I did that, but I am still not seeing all the fields listed. Specifically, I am looking for the Authentication Requirement field. Does anyone know how to get fields added, or what I may be missing?
Hello, I have a question about the [thruput] setting on a UF and internal Splunk logs. I did some tests with a Splunk UF: I needed to simulate a problem with the tcpout queue, and therefore I reduced the maxKBps = <integer> parameter in the limits.conf file to a low value (e.g. 3 KBps). The UF is set to send its internal logs to the IDX. However, I noticed that with such a low value for this parameter, the UF stopped sending its internal metric logs (i.e. the contents of the $SPLUNK_HOME/var/log/splunk/metrics.log file) to the IDX. Logs were still written to $SPLUNK_HOME/var/log/splunk/metrics.log, but were not sent to the IDX. Is this normal behavior? It looks as if there is a mechanism that prioritizes the collected data over internal Splunk logs and suppresses the sending of internal Splunk logs to the IDX. Is that really so, and is there such a mechanism? I tried to find something about it in the documentation, but without success. Thank you in advance for any information. Best regards, Lukas Mecir
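For reference, the setting under test looked like this on the UF; the 3 KBps value is just the low figure from the experiment described above:

# limits.conf
[thruput]
maxKBps = 3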
Hi all, it would be great if anyone has a solution for my timechart x-axis issue. Thanks in advance for your time and effort. Here is the problem: a timechart (5 line series) with span=30min and earliest=-180d@d. _time is transformed with | eval Date=strftime(Date, "%d/%m/%y %H:%M"). Since I don't use the _time variable, the timechart does not display the x-axis time intervals (first screenshot). If I zoom in, the x-axis text becomes visible (second screenshot). Is it possible to have x-axis time interval text (by month: 12 01 02 03 04 05 06)? Without it, the user has to hover the mouse over the line chart and read values from the tooltip box. Restriction: I need to respect the format "%d/%m/%y %H:%M", so I cannot use the default _time (which is available in Splunk). [Screenshots: first, without zoom, the Date is only readable via the tooltip; second, zoomed in, the x-axis labels become visible.] What I am looking to achieve, by either CSS or SPL (I don't think SPL can do it): x-axis time interval text (by month, %m or %d/%m/%y %H:%M would be great). Thanks
Hello, earlier when we interrogated Error or Stall transactions, we could get this view (this is from 25-May-21). We have been trying to get this view since yesterday and it is not available. What could be the reason? Any license issue, or any update to the dashboards? Thanks. Regards, Siva
I have a use case where I need to filter data by the source field, which always changes. In transforms.conf I use:

[foo]
REGEX = MY REGEX
DEST_KEY = queue
FORMAT = nullQueue

and in props.conf I use:

[source::process_events]
TRANSFORMS-01 = foo

The source always contains process_events, but there is more data, like a date and other info, that changes. Is there any way to filter data by a source wildcard? Thanks!
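props.conf source stanzas do accept wildcards: * matches any characters except path separators, and ... matches across them. A hedged sketch, assuming process_events can appear anywhere in the source path:

# props.conf
[source::...process_events...]
TRANSFORMS-01 = foo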
Hello, we need to develop a correlation search to implement this algorithm: if a specific custom event (here tagged as index="custom_app" categorie="custom_log") occurs for one user, we trigger an alert when there is no Apache access log for the same user. I have tried the following correlation search with the trigger condition: trigger alert when Number of Results is equal to 0.

index="linux_apache" sourcetype="apache:access:kv" [search index="custom_app" categorie="custom_log" | top limit=1 user | table user] | table user | dedup user

Thanks in advance for your help regarding this topic.
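One caveat with that shape: zero results can also mean no custom events fired at all, and top limit=1 only ever checks a single user. A hedged alternative that searches both sources at once and keeps users who appear only in the custom logs (index and field names copied from the question):

(index="custom_app" categorie="custom_log") OR (index="linux_apache" sourcetype="apache:access:kv")
| eval src=if(index=="custom_app", "custom", "apache")
| stats values(src) as srcs by user
| where srcs=="custom"

With this form the trigger condition flips to Number of Results greater than 0, and each result row names a user with a custom event and no matching Apache access.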
Hi everyone, I had been using the map command on a set of a few tens of entries. Basically it takes the BUsername field and looks up the customer's status using a curl command. But now the data set is getting bigger (it might reach 1-2k entries), and I sense the map command would be too inefficient here. What alternative can I use instead of the map command? I am not sure if I can use a nested search in this case.

| inputlookup Data_Topology where "location"="WINDSOR"
| table BUsername
| map maxsearches=100 search="| curl method=get uri=https://mdoss-api.****.corp.com/v2/customers/$BUsername$ | spath input=curl_message | fields - curl* **some data**=*"
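Not a drop-in replacement, but one common pattern at this scale is to move the per-user API calls out of the search pipeline: a scheduled script or modular input fetches the statuses in bulk and writes them to a lookup, and the search then enriches via lookup instead of map. A hedged sketch, assuming a lookup named customer_status maintained by that external job:

| inputlookup Data_Topology where location="WINDSOR"
| table BUsername
| lookup customer_status BUsername OUTPUT status

lookup behaves like a hash join, so it stays fast at a few thousand rows, whereas map launches one full search per input row.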
I've created a lookup file with two columns like this, basically a lookup file containing a list of search queries:

Name   | Value
Query1 | index=*xyz* field1="fasdasdasdadasdasd"
Query2 | index=*abc* field2="qweqweqweqweqwe"
Query3 | index=*pqr* field3="zxzxczxczczx"

I want to get the count of each query using inputlookup and the map command, in such a way that it returns 0 and does not omit a query when its count is 0, like this:

Name   | Count
Query1 | 200
Query2 | 0
Query3 | 4500

Could someone help, please?
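A hedged sketch, assuming the lookup file is named queries.csv with the Name and Value columns above. Because stats count always emits exactly one row, each mapped search returns a row even when nothing matches, so zero-count queries are not dropped (if the Value strings contain quotes, they may need escaping before substitution):

| inputlookup queries.csv
| map maxsearches=10 search="search $Value$ | stats count as Count | eval Name=\"$Name$\""
| table Name Count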
I have 3 search heads in a cluster. I just did a rolling restart, and this morning I started seeing the error below on a search member (running v8.2):

ERROR SHCMasterHTTPProxy [1338 SHPHeartbeatThread] - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/members captain=blt14788005:8089 rc=0 actual_response_code=500 expected_response_code=201 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">Cannot add peer=10.45.10.74 mgmtport=8089 (reason: removeOldPeer peer=3F2BAA5B-1792-4FE6-9393-499B8DAF8D33, serverName=blt14788004, hostport=10.45.10.74:8089, but found different peer=FD52AB8F-AF38-47E3-BDB6-C16D42E8AFB4 with serverName=blt14788004 and hostport=10.45.10.72:8089 already registered and UP)</msg>\n </messages>\n</response>\n"
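Not a fix, but a hedged first diagnostic step: the message suggests a stale peer registration (the same serverName on two different host:port pairs), so compare the captain's view of the members with each member's own view before acting:

splunk show shcluster-status

If the output still shows the old GUID/hostport pair, removing and re-adding the affected member per the Splunk search head clustering docs is the usual path.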