All Topics


index=myindex NUM
| where isnull(NXT)
| dedup MC
| eval lrm_time=[ search index=myindex2 | eventstats min(_time) as min_time | where _time=min_time | table min_time | dedup min_time | return $min_time ]
| eval formatted_time = strptime(AVAIL_TS, "%Y%m%d%H%M%S")
| eval lrm_frmt_time = strptime(strftime(lrm_time, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d %H:%M:%S")
| eval final_time = if(formatted_time > lrm_frmt_time, formatted_time, null())
| where isnotnull(final_time)
| join NUM [ search index=myindex3 NUM | eval ID = printf("%01d", ID) | rename ID as NUM | stats count by NUM | eval timestatus=case(count > 5, "Complete", count == 0, "Incomplete", count >= 1 AND count <= 5, "In Progress") ]
| search NUM = 1
| stats count AS Total

This query produces a count that I display as a single value. The file runs 4 times daily, so I will create 4 panels showing the counts for NUM = 1, 2, 3, and 4. How can I color each panel according to the timestatus field value (Complete, Incomplete, or In Progress)? Thanks in advance!
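One approach (a sketch, not tested against your data; it assumes each NUM resolves to a single timestatus) is to map timestatus onto a numeric severity and let rangemap drive the single-value color, since the single value visualization honors the range field (low/elevated/severe) that rangemap produces. Replacing the final search NUM = 1 | stats count AS Total with:

| search NUM = 1
| stats count AS Total, values(timestatus) AS timestatus
| eval statuscode = case(timestatus == "Complete", 0, timestatus == "In Progress", 1, true(), 2)
| rangemap field=statuscode low=0-0 elevated=1-1 default=severe
| fields Total range

Repeat the panel four times with NUM = 1 through 4. If you want custom colors instead of the default green/yellow/red, the single value's rangeValues/rangeColors options can be driven from statuscode instead.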
We are trying to onboard Akamai logs to Splunk. We installed the add-on, and it is asking for a proxy server and proxy host; I am not sure what these mean. Our Splunk instances are hosted on AWS, are refreshed every 45 days due to compliance, and are not exposed to the internet (internal only). How do we create and configure a proxy server here? Please guide me. This is the app installed: https://splunkbase.splunk.com/app/4310
Hi All, I have a Splunk query giving results in this format:

Time: 3/10/25 10:52:15.000 AM
Event:
{
  BCDA_AB_CD_01: 1,
  BCAD_AB__02: 0,
  BCDA_AB_DC: 1,
  BCAD_CD_02: 0
}

However, I want to remove BCAD_AB__02 and BCAD_CD_02 from the output. Please help me write a Splunk query that excludes these two fields. I tried
| fields - BCAD_AB__02 BCAD_CD_02
but this didn't work.
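If you are viewing raw events, note that fields - only removes search-time fields from tabular output; it does not rewrite the raw JSON shown in the event viewer. A sketch (index and sourcetype are placeholders) that extracts the JSON keys first and then drops the two unwanted ones:

index=your_index sourcetype=your_sourcetype
| spath
| fields - BCAD_AB__02 BCAD_CD_02
| table _time BCDA_AB_CD_01 BCDA_AB_DC

Alternatively, tabling only the fields you want (the last line alone) achieves the same visible result.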
Hello everyone! I came across some strange behavior. I was building a dashboard and noticed that some results look unexpected. The results are presented at the top of the screenshot. On the last row, you can see that ProvDuration is 0 and that StartTime and EndTime are equal. Moreover, other fields are also equal, which is illogical given the search logic. As you can see, StartTime and EndTime represent the min and max values of the _time field.

index="hrz" (sourcetype="hrz_file_log" AND "*is provisioning") OR (sourcetype="hrz_file_syslog" AND EventType="AGENT_STARTUP")
| rex field=_raw "VM\s+(?<MachineName>.*)$"
| table _time, PoolId, MachineName, _raw
| transaction MachineName startswith="Pool" endswith="startup" maxevents=2 keeporphans=false
| search (PoolId="*") (MachineName="*")
| search duration<=700
| stats min(duration) AS DurationMin, avg(duration) AS DurationAvg, max(duration) AS DurationMax, min(_time) AS StartTime, max(_time) AS EndTime BY PoolId
| eval DurationMin = round(DurationMin, 2)
| eval DurationAvg = round(DurationAvg, 2)
| eval DurationMax = round(DurationMax, 2)
| eval ProvDuration = round((EndTime - StartTime), 2)
| eval StartTime = strftime(StartTime, "%Y-%m-%d %H:%M:%S.%3Q")
| eval EndTime = strftime(EndTime, "%Y-%m-%d %H:%M:%S.%3Q")
| table PoolId, DurationMin, DurationAvg, DurationMax, ProvDuration, StartTime EndTime

I decided to dig deeper and analyze the search more carefully. After I opened the search from the dashboard, I found that the results look different: the last row looks as it should. You can see these results at the bottom of the screenshot. What could be wrong with my search, and what am I missing?
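One thing worth checking: the dashboard panel and the standalone search may run over different time ranges (dashboard time picker vs. search defaults), and transaction is sensitive to which events fall inside the window and to its own memory limits, so it can pair or drop events differently between runs. A transaction-free sketch that pairs the earliest and latest event per machine with stats (it assumes each MachineName has at most one start/end pair in the window):

index="hrz" (sourcetype="hrz_file_log" "*is provisioning") OR (sourcetype="hrz_file_syslog" EventType="AGENT_STARTUP")
| rex field=_raw "VM\s+(?<MachineName>.*)$"
| stats earliest(_time) AS StartTime latest(_time) AS EndTime values(PoolId) AS PoolId BY MachineName
| eval duration = EndTime - StartTime
| where duration <= 700
| stats min(duration) AS DurationMin avg(duration) AS DurationAvg max(duration) AS DurationMax min(StartTime) AS StartTime max(EndTime) AS EndTime BY PoolId

From there you can continue with your per-PoolId rounding and formatting; stats results are deterministic regardless of event order, which makes discrepancies easier to isolate.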
Hi Splunkers

I'm looking for a way to append a column with an ID based on the value of another field. The base search gives this:

index=transactionlog sourcetype=transaction earliest=-1h@h latest=@h
| table _time Item Count

datetime=2025-03-10T08:59:59 Item=Apple Count=8
datetime=2025-03-10T08:59:45 Item=Banana Count=2
datetime=2025-03-10T08:58:39 Item=Apple Count=5
datetime=2025-03-10T08:58:25 Item=Coconut Count=1
datetime=2025-03-10T08:57:36 Item=Banana Count=2

_time  Item     Count
...    Apple    8
...    Banana   2
...    Apple    5
...    Coconut  1
...    Banana   2

I'd like something that gives this:

_time  Item     Count  ID
...    Apple    8      1
...    Banana   2      2
...    Apple    5      1
...    Coconut  1      3
...    Banana   2      2

The ID is local, based only on the results. Each unique item is numbered.

I've tried streamstats count but this doesn't give the desired results.

Thanks all
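One way to number items by first appearance (a sketch using only the fields shown above): a running distinct count gives each event the number of unique items seen so far, and taking the minimum of that count per item pins every occurrence of an item to the number it got when it first appeared.

index=transactionlog sourcetype=transaction earliest=-1h@h latest=@h
| streamstats dc(Item) AS running_dc
| eventstats min(running_dc) AS ID BY Item
| fields - running_dc
| table _time Item Count ID

On the sample data this yields Apple=1, Banana=2, Coconut=3 on every row, matching the desired output.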
Hi all, I have the following query:

index=wineventlog source=wineventlog:security EventCode=4688
    [ search index=wineventlog source=wineventlog:security EventCode=4663 Object_Name="*hello.exe" Process_Name="*welcome.exe"
      | fields Account_Name Process_Name
      | rename Process_Name as New_Process_Name ]
| table _time ComputerName Account_Name New_Process_Name Initiating_Process_Name

EventCode=4663 has a field called Object_Name, while EventCode=4688 does not. My end goal is to display a table showing the Object_Name column alongside New_Process_Name and Initiating_Process_Name. The above query identifies the Account_Name and New_Process_Name in the subsearch and feeds them into the main search to identify the Initiating_Process_Name. I want to include the Object_Name from EventCode=4663 in this table as well. How can I do it?
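The subsearch only hands back Account_Name and New_Process_Name as filter terms, so Object_Name never reaches the outer 4688 events, and adding it to the subsearch fields would filter out every 4688 event (they don't carry that field). One sketch that merges both event codes in a single search and stitches the columns together with stats (it assumes Account_Name is the key that ties the two event codes together):

index=wineventlog source=wineventlog:security (EventCode=4688 OR (EventCode=4663 Object_Name="*hello.exe" Process_Name="*welcome.exe"))
| eval New_Process_Name=if(EventCode=4663, Process_Name, null())
| stats latest(_time) AS _time values(ComputerName) AS ComputerName values(Object_Name) AS Object_Name values(New_Process_Name) AS New_Process_Name values(Initiating_Process_Name) AS Initiating_Process_Name BY Account_Name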
Hi All, I need help with the following. There is a field named lvl, which is of type=string. Raw data:

{
  "time": "2025-03-10T06:20:29",
  "corr": "3hgewhrger2346324632434gjhf",
  "dpnt": "test.dpmt",
  "appn": "test - appn",
  "lvl": "Warn",
  "mod": "test.mod",
  "tid": "171",
  "oper": "SetTestContext",
  "rslt": "Succeeded",
  "msg": "test msg",
  "inst": "test inst",
  "x-trace-id": "Root=1-65325bhg-test3;Sampled=1"
}

Though lvl is of type string, | search lvl="Warn" or lvl=Warn returns no results. However, if I do | spath lvl and then | search lvl="Warn" or lvl=Warn, it does show results. For other fields like dpnt, which is also of type string, | search dpnt="test.dpmt" works fine. I understand spath works on structured data formats like JSON and XML, but I can't work out what is happening in this case. Why is the lvl string field not behaving as expected? Can anyone please shed some light on this?

Thanks,
PNV
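One thing worth checking (a sketch; the sourcetype name is a placeholder): if the sourcetype doesn't have JSON key-value extraction enabled, automatic extraction may pick up some keys (like dpnt) through generic key=value rules while missing or mangling others, which would explain why an explicit spath fixes it. Setting JSON mode makes all keys extract consistently at search time:

# props.conf (search-time, on the search head)
[your_sourcetype]
KV_MODE = json

Comparing | eval t=typeof(lvl) before and after spath can also reveal whether the pre-spath lvl is a different, conflicting extraction.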
Hi Team, I'm very interested in learning Splunk SIEM. I have downloaded the trial version in my home lab environment, and I'm also using Carbon Black EDR. I tried to integrate Carbon Black EDR with Splunk so I can check EDR logs in Splunk, but I don't know how. Please provide a step-by-step process for this integration. Kindly do the needful.

Thanks
Satya 7013634534
I have been working with Splunk for 3 years. I am well versed in development and recently started working on the admin side; I am still learning. I don't have knowledge of any other tools or languages apart from Splunk. My current project requires AWS (our Splunk instances are hosted on AWS), so I have started learning AWS. My question: alongside Splunk, what tool or software can I upskill in to get more opportunities in the future? I have these thoughts, though I'm not sure they are right:
1. Is it good to learn DevOps, since I have already started AWS?
2. Cybersecurity and SIEM
3. SRE
I have zero coding knowledge to date. Please suggest a good path where I can upskill myself, maybe over the next year.
Hello All, my company uses Outlook (M365 Business Standard). I want to use Outlook as the SMTP server for Splunk. Here is the information for Outlook (POP, IMAP, and SMTP settings for Outlook.com - Microsoft Support):

SMTP server name: smtp-mail.outlook.com
SMTP port: 587
SMTP encryption: STARTTLS
Authentication method: OAuth2/Modern Auth

According to this link from Microsoft (How to set up a multifunction device or application to send email using Microsoft 365 or Office 365 | Microsoft Learn), client SMTP submission using Basic authentication in Exchange Online is scheduled for deprecation in September 2025. As a replacement, High Volume Email for Microsoft 365 is a suitable option, but it relies on an app password and token, which is not supported in Server settings > Email settings in Splunk. Does Splunk support using outlook.com as an SMTP server? Could anyone please provide a guide for using outlook.com as an SMTP server? FYI: Splunk Enterprise, Version: 9.1.0.2. And if we use Splunk Cloud, is it easier to use outlook.com as an SMTP server?
Hi, I am looking for an SSL certificate check that supports SNI. I've tried Certificates-Expiry; I get results, but it doesn't support SNI. Now I am trying SSL Certificate Lookup. The .py script seems to have provision for SNI, but I am not getting any results, nor any errors; everything is empty.

| makeresults
| eval url = "mywebsite.com"
| lookup sslcert_lookup dest AS url

What am I missing?

Cheers
Andre
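One quick sanity check (a sketch; whether the app's script honors a renamed input field is an assumption worth testing): call the external lookup with the field literally named dest, and watch search.log for errors from the lookup script, since an external lookup that cannot reach the host on port 443 often returns empty rows silently:

| makeresults
| eval dest = "mywebsite.com"
| lookup sslcert_lookup dest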
Hello, I currently run Splunk Enterprise and want to find out how to set a data retention policy for the index labelled 'main' within the Indexes section of Splunk Enterprise. Since the 'main' index is filling up and taking most of the space on the SSD, I need a policy that automatically deletes any data in the 'main' index older than 3 months. I have found the indexes.conf file, but under the settings for the 'main' index there isn't a line for frozen bucket duration. Is it simply a case of adding the line for frozen bucket duration or max space? Thank you!
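Assuming the default frozen behavior (deletion, since no coldToFrozenDir is configured) is what you want, adding frozenTimePeriodInSecs to the [main] stanza is the standard approach; 90 days is 7,776,000 seconds. A sketch:

# indexes.conf
[main]
# buckets roll to frozen (deleted by default) once their newest event is older than 90 days
frozenTimePeriodInSecs = 7776000

Retention is enforced per bucket, so events are removed only when the newest event in their bucket crosses the threshold, and a restart is needed for the change to take effect. maxTotalDataSizeMB on the same stanza caps the index by size instead; whichever limit is hit first triggers freezing.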
We have a playbook which makes calls to the SOAR REST API artifacts endpoint. We currently have to pass the auth token for the REST API call in the script as plain text, which isn't ideal. Given that we haven't configured a vault or vault-like solution (CA, Vault, etc.):

1) We set a SOAR global environment variable and stored the value as a secret, but how do we call this in our script? I have tried looking at all the plausible attributes in the phantom library; documentation is next to none for this. I also tried os.environ.get, but custom variables are not present in it, although I can access variables like NO_PROXY and get their values. Any ideas around this will help.

2) I am also trying to get the base URL for constructing the REST call. Using build_phantom_rest_url or get_base_url returns the URL as the local address 127.0.0.1 and not our specific URL.

In short, we are trying to access the values shown in the image from within our custom function and haven't found a solution. Making a REST API call requires auth, and that option is ruled out for getting the API token. Any inputs will help. Thanks in advance.
Hello Splunkers! We are experiencing frequent KV Store crashes, which cause all reports to stop functioning. The error message observed is: "[ReplBatcher] out of memory." This issue is significantly impacting our operations, as many critical reports rely on the KV Store for data retrieval and processing. Please help me get this fixed.

Thanks in advance!
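As a first diagnostic (run on the instance hosting the KV store), the status command reports the store's state and replication details, which helps narrow down whether the underlying mongod process is restarting or falling behind:

$SPLUNK_HOME/bin/splunk show kvstore-status

From there, mongod.log under $SPLUNK_HOME/var/log/splunk and the host's available memory at crash time are the usual next places to look.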
Hi, I am adding a query to my dashboard, and when I click on the highlighted "run search" it does not take me to the search. After the index and source, the keyword search is *word:is:info:name:in:data:invok*. Not sure why it is not taking me to the search page.
Hi, I'm trying to create a filter for network connections, but I cannot make several tokens work at the same time. I want to create an OR expression. In my head it works like this:
1. The search should work if I put a process_name in the text field.
2. If a process_name is selected from the dropdown along with the text field, search for both processes: (process_name IN ("$token1$","$token2$")).
3. If the first two are not chosen, but a user is selected from the User dropdown, filter by User.
4. If one or two process_name tokens are used and a User is selected, filter by the chosen process names and then by user.

I have $procname2$ as the token for the text field and $procname$ for the dropdown of processes. Both process_name tokens work if the dropdown is selected; then the search uses both the dropdown token and the text token. The User token doesn't work at all.

Query for the search:

index=sysmon_wec AND (EventCode=22 OR event_id=22) AND ((process_name IN ("$procname$", "$procname2$") OR User IN ("$user$")))
| makemv tokenizer="([^\r\n]+)(\r\n)?" User
| mvexpand User
| where NOT (User="SYSTEM" OR User="NT AUTHORITY\SYSTEM" OR User="NT AUTHORITY\NETWORK SERVICE" OR User="NT AUTHORITY\LOCAL SERVICE")
| dedup process_name
| head 100
| table process_name,User,ComputerName,QueryName,QueryResults

Here is the full code of my dashboard:

<form version="1.1" theme="light">
  <label>Find Network connections(DNS)</label>
  <fieldset submitButton="false">
    <input type="text" token="procname2">
      <label>Enter procname: e.g. opera.exe</label>
      <default></default>
    </input>
    <input type="dropdown" token="procname" searchWhenChanged="true">
      <label>Procname</label>
      <fieldForLabel>process_name</fieldForLabel>
      <fieldForValue>process_name</fieldForValue>
      <search>
        <query>index=sysmon_wec AND (EventCode=22 OR event_id=22) | dedup process_name | head 1000 | table process_name</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
    <input type="dropdown" token="User" searchWhenChanged="true">
      <label>User</label>
      <fieldForLabel>User</fieldForLabel>
      <fieldForValue>User</fieldForValue>
      <search>
        <query>index=sysmon_wec AND (EventCode=22 OR event_id=22) | makemv tokenizer="([^\r\n]+)(\r\n)?" User | mvexpand User | where NOT (User="SYSTEM" OR User="NT AUTHORITY\SYSTEM" OR User="NT AUTHORITY\NETWORK SERVICE" OR User="NT AUTHORITY\LOCAL SERVICE") | dedup User | head 1000 | table User</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>process_name</title>
        <search>
          <query>index=sysmon_wec AND (EventCode=22 OR event_id=22) AND ((process_name IN ("$procname$", "$procname2$") OR User IN ("$user$"))) | makemv tokenizer="([^\r\n]+)(\r\n)?" User | mvexpand User | where NOT (User="SYSTEM" OR User="NT AUTHORITY\SYSTEM" OR User="NT AUTHORITY\NETWORK SERVICE" OR User="NT AUTHORITY\LOCAL SERVICE") | dedup process_name | head 100 | table process_name,User,ComputerName,QueryName,QueryResults</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
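One likely culprit: Simple XML token names are case-sensitive, and the input is defined as token="User" while the query references $user$, so that token never resolves and the panel waits for input. A sketch of the relevant pieces (the sentinel value __none__ is arbitrary), giving each input an explicit default so unset filters drop out of the logic:

<input type="text" token="procname2">
  <default>__none__</default>
</input>
<input type="dropdown" token="procname" searchWhenChanged="true">
  <default>__none__</default>
</input>
<input type="dropdown" token="user" searchWhenChanged="true">
  <default>*</default>
</input>

and in the panel query, a condition that matches all processes only when neither process token is set, then applies the user filter on top:

(process_name IN ("$procname$", "$procname2$")
 OR ("$procname$"="__none__" AND "$procname2$"="__none__"))
AND User="$user$"

With these defaults, cases 1-4 above all reduce to the same single condition, so no conditional token logic is needed.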
Both my workplace and I are new to Splunk, and I'm developing some reports and dashboards for one of our applications. One dashboard I am working on includes a table of events showing when some reports are downloaded. The log file's sourcetype is _json_AuditFramework. I'm looking to pull the parameter named @DocumentId and its corresponding value into a table. Right now, the table syntax lists parameters{}.value, and when there are multiple parameters{}.name and parameters{}.value entries in the log, they all show in the table. Depending on the report, I'm including trace information as well, and it has the same issue as the parameters. I haven't had luck with similar posts I found.

{"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentGetById","commandText":"ref.DocumentGetById","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentId","value":123123}],"serverIPAddress":"100.100.100.100","serverHost":"WEBSERVER","clientIPAddress":"101.101.101.101","sourceSystem":"WebSite","module":"Vendor.PRODUCT.BLL.DocumentManagement","accessDate":"2025-03-06T17:26:47.4112974-07:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.PRODUCT.Web.UI.Website.DocumentManagement.ViewDocument","method":"Page_Load"},{"type":"Manager","class":"Vendor.PRODUCT.BLL.DocumentManagement.DocumentManager","method":"Get"}]}

host = WEBSERVER
source = Logfile path
sourcetype = _json_AuditFramework
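One sketch for pulling just the @DocumentId value out of the parameters array, using the multivalue fields already extracted from the JSON: mvfind locates the position of the matching name, and mvindex takes the value at that same position. The traceInformation array can be handled the same way with its own name/value pair of multivalue fields.

sourcetype=_json_AuditFramework
| eval DocumentId = mvindex('parameters{}.value', mvfind('parameters{}.name', "^@DocumentId$"))
| table accessDate userName DocumentId

Note the single quotes around the field names in eval; they are required because the names contain curly braces.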
Hello, I have a dashboard with a couple of pie charts and a summary table. The first pie chart sets token 1, the second sets token 2, and so on. I was wondering if there is any way to refresh the summary table with whichever token values are currently selected. In other words, if token 1 is set but token 2 is unset:
BASE SEARCH | search token1=$token1$ token2=* (since token 2 is not set yet)
If token 1 and token 2 are both set:
BASE SEARCH | search token1=$token1$ token2=$token2$
Eventually what I am trying to do is keep refreshing the dashboard based on user clicks, but at the moment, unless the user sets token1 and token2, my dashboard panel shows nothing. It is stuck at "waiting for user input".
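One sketch: initialize each token to a wildcard in the form's init block so the table renders before any click, and let each pie chart's drilldown overwrite its own token; the panel search then always references both tokens (field1/field2 stand in for whatever fields the tokens filter on):

<init>
  <set token="token1">*</set>
  <set token="token2">*</set>
</init>

BASE SEARCH | search field1="$token1$" field2="$token2$"

A drilldown can also <set> a token back to * to clear that selection without leaving the panel blank.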
I need to create a playbook in Splunk SOAR to check multiple IPs against AbuseIPDB. Currently, AbuseIPDB allows only one IP per lookup.
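One sketch of a playbook code block (the action and asset names are assumptions; check your AbuseIPDB app's action list): phantom.act runs the action once per parameter set, so passing a list of parameter dicts fans out one reputation lookup per IP even though the API only takes one IP at a time.

import phantom.rules as phantom

def check_ips(container):
    # e.g. collected from the container's artifacts; hardcoded here for the sketch
    ips = ["198.51.100.7", "203.0.113.9"]
    # one parameter set per IP -> one action run per IP
    parameters = [{"ip": ip} for ip in ips]
    phantom.act("ip reputation", parameters=parameters,
                assets=["abuseipdb"], callback=ip_reputation_done,
                name="ip_reputation_multi")

def ip_reputation_done(action=None, success=None, container=None, results=None,
                       handle=None, **kwargs):
    # each IP's verdict arrives in results; aggregate or pin as needed
    return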
I am having difficulty converting event logs to metric data points (https://docs.splunk.com/Documentation/Splunk/9.4.0/Metrics/L2MOverview). According to the documentation, I think I need index-time extraction to modify the fields in the event. Raw event examples:

server_request_bytes{kafka_id="lkc-j2km8w",principal_id="u-j69zjw",type="Fetch",} 3.14 1736873280000
server_response_bytes{kafka_id="lkc-j2km8w",principal_id="u-j69zjw",type="ApiVersions",} 4.2 1736873280000

My goal is to parse the event so that it has the fields necessary for log-to-metric conversion. I think that means these are required (in addition to the timestamp):

metric_name: server_request_bytes
numeric_value: 3.14
measurement: server_request_bytes=3.14

I have 2 stanzas in transforms.conf which parse the metric name and the numeric value:

[metric_name]
REGEX = ^"(?P<metric_name>[a-z_-]+_[a-z_-]+\w+)
FORMAT = metric_name::$metric_name

[numeric_value]
REGEX = ^[^ \n]* (?P<metric_value>\d+\.\d+)
FORMAT = numeric_value::$metric_value

props.conf looks like this:

[my_log_to_metrics]
# extract metric fields
TRANSFORMS-metric_name = metric_name
TRANSFORMS-numeric_value = numeric_value
category = Log to Metrics
# parse timestamp
TIME_PREFIX = \}\s.*\s
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

Currently, when I try using this sourcetype, I see this error message in splunkd.log:

Metric event data without a metric name and properly formated numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed, have one or more keys of the form "metric_name:<metric>" (e.g..."metric_name:cpu.idle") with corresponding floating point values.

(And no metric data appears in the metrics index.) I have a couple of questions:
1. Are the fields metric_name, numeric_value, and measurement required to be extracted at index time with transforms.conf for the log-to-metric conversion?
2. How can I combine the extracted name and value fields to create the measurement field without writing another regex statement to parse the same thing?
3. How can I parse all of the fields between the curly braces (kafka_id, principal_id, type) as dimensions for the metric, in a generic way?
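A few things worth checking (sketches under stated assumptions, not verified against your environment). First, the [metric_name] regex anchors on a literal double quote (^"), but the sample events begin directly with the metric name, so that field likely never extracts. Second, the splunkd error asks for keys of the form "metric_name:<metric>"; capturing the name and the value in one regex lets a single transform emit exactly that key, which also answers the measurement question without a second parse. Third, REPEAT_MATCH lets one transform capture every key="value" label inside the braces as a dimension. Stanza names below are hypothetical; wire them up with TRANSFORMS- lines in props.conf as you already do:

[metric_measurement]
# events start with the metric name itself (no leading quote) and end
# with "<value> <epoch_ms>"; emit the metric_name:<name>::<value> form
REGEX = ^(?P<name>[a-z_]+)\{.*\}\s+(?P<value>\d+(\.\d+)?)\s+\d+
FORMAT = metric_name:$1::$2
WRITE_META = true

[metric_dims]
# capture every key="value" pair inside the braces as an indexed dimension
REGEX = ([A-Za-z_]+)="([^"]*)"
FORMAT = $1::$2
WRITE_META = true
REPEAT_MATCH = true

Note WRITE_META = true on both stanzas: without it (or an explicit DEST_KEY = _meta), index-time transforms don't write indexed fields at all, which may be another reason the metric pipeline sees no metric_name.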