All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


What is the location of Splunk commands such as inputlookup, lookup, mvexpand, multikv, split, stats, eval, chart, and tstats in the Splunk directory?
How can I put the current date in the WHERE clause? For example, with the query below I want to fetch all IDocs that have been created today. I have just hard-coded today's date. What should I use to express the "today" condition?

SELECT CREDAT, DOCNUM, STATUS, MESTYP, TIMESTAMP
FROM idocs_details
WHERE MESTYP = "ZPSWDMGMT" AND CREDAT = "20220324" AND STATUS = "51"
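One way to avoid hard-coding the date is to build the YYYYMMDD string at query time and bind it as a parameter. A minimal sketch in Python (the table and column names come from the question; the placeholder style depends on your database driver):

```python
from datetime import date

def credat_today() -> str:
    """Today's date in the YYYYMMDD form the CREDAT column appears to use."""
    return date.today().strftime("%Y%m%d")

# Parameterized form of the query from the question ("?" is the
# DB-API qmark placeholder style; other drivers use %s or :name).
query = (
    "SELECT CREDAT, DOCNUM, STATUS, MESTYP, TIMESTAMP "
    "FROM idocs_details "
    "WHERE MESTYP = 'ZPSWDMGMT' AND CREDAT = ? AND STATUS = '51'"
)
params = (credat_today(),)
```

Many SQL dialects can also compute the value server-side, e.g. something like TO_CHAR(CURRENT_DATE, 'YYYYMMDD'), though the exact function depends on the database.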
Hello, I was wondering if there is a timeline for when the Status Indicator app will be usable in Dashboard Studio. I want to convert my Classic dashboards to the new version, and we heavily use the Status Indicator app.

Regards,
Testy
Hello, we are trying to add a filter to the Windows event log input. The input configuration is:

[WinEventLog://Security]
disabled = 0
index = windows
blacklist1 = 5145,5156
blacklist2 = EventCode=4672 SubjectUserName="exchange\$"
renderXml=true
suppress_text=true
supress_sourcename=true
supress_keywords=true
suppress_task=true
suppress_opcode=true

blacklist1 works fine, but blacklist2 does not. The goal is to filter out event ID 4672 where SubjectUserName equals "exchange$". Any ideas?

Thank you
I have log events (each about 260 lines) related to our AWS EMR cluster performance metrics. It seems to be just a collection of output from certain Linux commands.

If I want to parse, e.g., the output of free -m to generate some table output or a timechart, how would I start parsing these (assuming it's possible)? "Extract New Fields" using a regular expression didn't seem to work.
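For reference, the shape of a free -m block is regular enough that a single regex can pull the numbers out, and the same pattern idea would carry over to an SPL rex. A minimal sketch in Python; the sample numbers are made up:

```python
import re

# Sample "free -m" output as it might appear inside one of the log events.
sample = """\
              total        used        free      shared  buff/cache   available
Mem:          31816       12040        4971         820       14804       18508
Swap:          2047           0        2047
"""

# Pull the first three numeric columns off the "Mem:" line.
mem_re = re.compile(
    r"^Mem:\s+(?P<total>\d+)\s+(?P<used>\d+)\s+(?P<free>\d+)", re.MULTILINE
)

m = mem_re.search(sample)
mem = {k: int(v) for k, v in m.groupdict().items()}
print(mem)  # {'total': 31816, 'used': 12040, 'free': 4971}
```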
I installed a new Splunk Enterprise server and configured it as a deployment server as documented here: https://docs.splunk.com/Documentation/Splunk/8.2.5/Updating/Aboutdeploymentserver

Then I added an app in $SPLUNK_HOME/etc/deployment-apps/MyApp. On the 26 forwarders running Ubuntu 20.04 I ran splunk set deploy-poll MyDS:8089, but only 20 of them show up in forwarder management. When I remove one of them by clicking "delete record", I can add a missing one by running splunk set deploy-poll MyDS:8089 again. A license is installed too, and it says 'DeployServer': 'ENABLED'. Any ideas? Thanks.
I'm trying to set up Splunk to find suspicious traffic, both incoming and outgoing. Right now I'm trying to exclude traffic that comes from places that are not suspicious (a whitelist), like social media websites, news websites, internal traffic, etc., and use blacklists to trigger alerts if anything on them tries to connect to my IP.

I'm not sure where to start. Is this even possible in Splunk? There are existing blacklists online that are accessed via an API key; can I use that API key in Splunk? Or do I download an entire database and upload it from my C: drive? I really hope you can help me here.
Hi. How do I set up a Health Rule from Monday to Friday, from 06:00 to 23:59? I'm trying these expressions:

Start: 0 6 * * 1-5 and 0 6 * * MON-FRY and 0 0 6 ? * MON-FRI
End: 59 23 * * 1-5 and 59 23 * * MON-FRY

But this message appears: "Error 0 6 * * MON-FRI is not a parseable cron expression". Which is the right way to set this up? Thanks in advance for your help.
Hi, I am trying to create a table of the top N categories per region for a number of indexes. However, when I run the query on some indexes, the necessary fields (category, region, NodeName, host, ...) exist in the events, yet no table is produced in the Statistics tab. The statistics output is as follows: And here are the respective events with the necessary fields: Why would that be?

Thanks, Patrick
I have the raw string below:

03 Mar 2022 10:08:18,188 GMT ERROR [dbdiNotificationService,ServiceManagement] {} - Caught Runtime exception at service dbdiNotificationService
java.lang.IllegalArgumentException: No enum constant com.db.fx4capi.Fx4cApiLocal.TradeProcessingStatus.TRADE_STATUS_CANCELLED
at java.lang.Enum.valueOf(Enum.java:238) ~[?:1.8.0_311]
at com.db.fx4capi.Fx4cApiLocal$TradeProcessingStatus.valueOf(Fx4cApiLocal.java:10) ~[trade-22.1.1-8.jar:?]
at com.db.fx4cash.trade.step.GetTradeReferenceAndStatusStep.step(GetTradeReferenceAndStatusStep.java:24) ~[step-22.1.1-8.jar:?]
at com.db.servicemanagement.TransactionDispatchService.executeIteration(TransactionDispatchService.java:275) [servicemanagement-22.1.1-8.jar:?]
at com.db.servicemanagement.TransactionDispatchService.startDispatch(TransactionDispatchService.java:673) [servicemanagement-22.1.1-8.jar:?]
at com.db.servicemanagement.TransactionDispatchService.run(TransactionDispatchService.java:91) [servicemanagement-22.1.1-8.jar:?]
at com.db.servicemanagement.ServiceThread.run(ServiceThread.java:36) [servicemanagement-22.1.1-8.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_311]

I would like to capture the whole error, from "Caught Runtime exception" through the end of the stack trace. I am using the command below but getting only partial output:

index=app_events_fx4cash_uk_prod source=*STPManager-servicemanagement.20220303-100818.log* | rex field=_raw "^[^\-\n]*\-\s+(?P<Error>.+)" | table Error

My output:

Caught Runtime exception at service dbdiNotificationService

But my requirement is to capture the whole error, including the stack trace.
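The capture stops at the first newline because "." does not match line breaks by default. Adding the single-line modifier (?s) lets the same pattern run through the stack trace; a minimal sketch in Python with a shortened copy of the event (in SPL the equivalent would be rex "(?s)^[^\-\n]*-\s+(?<Error>.+)"):

```python
import re

# Shortened version of the event from the question.
event = (
    "03 Mar 2022 10:08:18,188 GMT ERROR [dbdiNotificationService,ServiceManagement] {} "
    "- Caught Runtime exception at service dbdiNotificationService\n"
    "java.lang.IllegalArgumentException: No enum constant "
    "com.db.fx4capi.Fx4cApiLocal.TradeProcessingStatus.TRADE_STATUS_CANCELLED\n"
    "at java.lang.Enum.valueOf(Enum.java:238) ~[?:1.8.0_311]"
)

# (?s) makes "." match newlines, so the capture runs to the end of the
# event instead of stopping at the first line break.
pattern = re.compile(r"(?s)^[^\-\n]*-\s+(?P<Error>.+)")
error = pattern.search(event).group("Error")
print(error.splitlines()[0])  # Caught Runtime exception at service dbdiNotificationService
```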
Hi all, I am ingesting Cisco FTD logs and currently using the Cisco ASA add-on, which works fine for a lot of event messages. Unfortunately it is not working perfectly, as there is one event message that is not recognized by the add-on. What Splunk-supported method is best for a standardized onboarding with full CIM knowledge? I do not want to use eStreamer, as it mostly creates issues and is not Splunk-supported. Currently used: "Splunk_TA_cisco-asa-4.2.0"

Best,
O.
Can someone help me with a regex to get the Security ID value under Target Account (EMEA\frclichyloftvcL05.01 in the sample below)? Here is the event in text form:

03/23/2022 03:20:16 PM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4738
EventType=0
Type=Information
ComputerName=FRDPLIDC1.emea.loreal.intra
TaskCategory=User Account Management
OpCode=Info
RecordNumber=386009504
Keywords=Audit Success
Message=A user account was changed.
Subject:
Security ID: EMEA\romain.pruneaux-adm
Account Name: romain.pruneaux-adm
Account Domain: EMEA
Logon ID: 0x31BBDCF0
Target Account:
Security ID: EMEA\frclichyloftvcL05.01
Account Name: frclichyloftvcL05.01
Account Domain: EMEA
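One approach is to anchor the capture on the "Target Account:" label so the Subject's Security ID is skipped. A minimal sketch in Python on a trimmed copy of the event (the equivalent SPL would be something like rex "Target Account:\s+Security ID:\s+(?<target_sid>\S+)"):

```python
import re

# Trimmed copy of the Message block from the sample event.
message = (
    "Subject:\n"
    "Security ID: EMEA\\romain.pruneaux-adm\n"
    "Account Name: romain.pruneaux-adm\n"
    "Account Domain: EMEA\n"
    "Logon ID: 0x31BBDCF0\n"
    "Target Account:\n"
    "Security ID: EMEA\\frclichyloftvcL05.01\n"
    "Account Name: frclichyloftvcL05.01\n"
)

# Anchoring on "Target Account:" skips the Subject's Security ID;
# \s* absorbs the newline between the label and the field.
pattern = re.compile(r"Target Account:\s*Security ID:\s*(?P<target_sid>\S+)")
sid = pattern.search(message).group("target_sid")
print(sid)  # EMEA\frclichyloftvcL05.01
```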
Hi, I am using the query below to bring the output into a graph dashboard, but it returns the same Disk and Memory values for every row. I have attached the output. Can anyone help me?

Query:
index=oswin sourcetype="Perfmon:Processor" host=COPYCAR01
| timechart avg(cpu_load_percent) as CPU span=15min
| join type=left host
    [| mstats avg("Memory.%_Committed_Bytes_In_Use") as Memory WHERE index=oswinperf AND host=COPYCAR01 span=15min ]
| join type=left host
    [| mstats avg("LogicalDisk.%_Free_Space") as Diskfree WHERE index=oswinperf AND host=COPYCAR01 span=15min
    | eval DISK=100 - Diskfree
    | table _time,DISK]

Output:
_time                 CPU                DISK               Memory
2022-03-24 09:00:00   55.78524325650826  57.29944034187627  11.84846114177454
2022-03-24 09:15:00   59.38775699798989  57.29944034187627  11.84846114177454
2022-03-24 09:30:00   56.71582822628451  57.29944034187627  11.84846114177454
I am searching as below, but my join operation is not bringing back results for a couple of imei records. I have 100 different imei numbers, but 10 of them are not returning any results.

index="etl_pipeline_data" environment=prd source=meta_data origin IN (device_properties, gsm_info, backend_transaction)
| fields _time, tenant, origin, imei, timestamp, timestamp_device, tz_offset_cest
| eval ts_device_epoch=strptime(timestamp_device,"%Y-%m-%dT%H:%M:%S.%3N")
| eval ts_device=ts_device_epoch+tz_offset_cest
| eval eventdate=strftime(ts_device,"%Y-%m-%d")
| stats latest by tenant, origin, imei, ts_device
| rename latest(*) as *
| stats values(*) by tenant, imei, eventdate
| join type=left imei
    [| loadjob savedsearch="xx:yyy:DEVICE_TRAINPASS_Report_db" ]
| where imei = 352369082111082
| table imei, eventdate, train2s

As proof, in a second search I checked the data type of the records; there are no issues with that. I also tried the method below instead of join, but it's not returning any records either:

index="etl_pipeline_data" environment=prd source=meta_data origin IN (device_properties, gsm_info, backend_transaction)
| fields _time, tenant, origin, imei, timestamp, timestamp_device, tz_offset_cest
| eval ts_device_epoch=strptime(timestamp_device,"%Y-%m-%dT%H:%M:%S.%3N")
| eval ts_device=ts_device_epoch+tz_offset_cest
| eval eventdate=strftime(ts_device,"%Y-%m-%d")
| stats latest by tenant, origin, imei, ts_device
| rename latest(*) as *
| stats values(*) by tenant, imei, eventdate
| table imei, eventdate
| append
    [| loadjob savedsearch="xx:yyyy:DEVICE_TRAINPASS_Report_db"
    | fields imei, eventdate, trains ]
| where imei = 352369082111082

Is there any limitation in Splunk? Could you please help me achieve this merge operation?
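Two things may be worth checking here, though neither is certain from the question alone: Splunk's join truncates its subsearch at a result limit (commonly around 50,000 rows), so a large loadjob can silently drop some imei values; and a type or format mismatch on the key will also fail silently. The latter can be illustrated with a small Python sketch (the data is made up):

```python
# Made-up illustration: a join keyed on imei silently drops rows when one
# side stores the key as a string and the other as a number.
left = [{"imei": "352369082111082", "eventdate": "2022-03-24"}]
right = {352369082111082: {"train2s": 7}}  # numeric keys

before = [r for r in left if r["imei"] in right]
print(len(before))  # 0 -- "352369082111082" != 352369082111082

# Normalizing both sides to the same type (strings here) restores the match.
right_str = {str(k): v for k, v in right.items()}
after = [r for r in left if r["imei"] in right_str]
print(len(after))  # 1
```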
Hi Splunkers, as part of my tasks I examined some existing Splunk searches, one of which concerns a Log4j vulnerability; in particular, I encountered the rule "ESCU - Log4Shell JNDI Payload Injection Attempt - Rule", which has the following code:

| from datamodel Web.Web
| regex _raw="[jJnNdDiI]{4}(\:|\%3A|\/|\%2F)\w+(\:\/\/|\%3A\%2F\%2F)(\$\{.*?\}(\.)?)?"
| fillnull
| stats count by action, category, dest, dest_port, http_content_type, http_method, http_referrer, http_user_agent, site, src, url, url_domain, user
| `log4shell_jndi_payload_injection_attempt_filter`

I noted the use of the _raw field and that, even though a data model is used, the tstats command is avoided and a normal stats appears in the code instead. So I tried to translate it into a search that uses tstats, something like this:

| tstats summariesonly=true fillnull_value="N/D" count from datamodel=Web by Web.log_region, Web.log_country, index, host, Web.src, Web.dest _raw
| `drop_dm_object_name("Web")`
| regex _raw="[jJnNdDiI]{4}(\:|\%3A|\/|\%2F)\w+(\:\/\/|\%3A\%2F\%2F)(\$\{.*?\}(\.)?)?"

and I got no results. Next, I removed the regex filter and noted that, when the search runs, the _raw field is filled with the "N/D" value. Does this mean that _raw cannot be used with tstats?
I would like to generate an alert if there is no event for the listed destinations in the last 30 minutes. I have tried the search below, but it does not work:

Index=_firewall sourcetype="f5:bigip:asm:syslog" dest_ip="10.10.10.20" earliest=-30m latest=now
| stats count by dest_ip, Service
| eval count=if(count<=20,0,0)
Hello, I have a scheduled report which is used to fill a summary index.

The report for the summary index is scheduled hourly (based on index time). The event data is usually indexed once per day (though sometimes there are multiple data ingestions from the server per day).

The issue is that the search is inefficient. In normal cases this is no problem, but when a day was skipped for ingestion and the next day therefore indexes two days of event data at once, the report for the summary index crashes and there is a gap in the summary index. To avoid data gaps, I configured the search to be durable.

This is the configuration regarding the time constraints of the search: This is the configuration for the durable search (the backfill method is "multiple", as recommended for searches with transforming commands):

To test whether data gaps in the summary index are recovered automatically, I stopped the data flow to the event index for 5 days and then indexed all the data at once, knowing that this would cause the scheduled report to crash. Unfortunately, the durable search did not recover the data gap in the summary index. From the 5 days, only a few hours were indexed into the summary index.

Does someone have an idea why this is so? Thanks and best regards
I have the system column "_time" with the output below:

2022-03-16 11:12:18.723

I would like to segregate the date and time with a rex command. The output should look like this, with new column names:

Date = 2022-03-16
Time = 11:12:18
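Two named captures on the rendered timestamp cover this; note that in SPL, _time is internally an epoch value, so it is often simpler to derive the columns with eval, e.g. something like eval Date=strftime(_time, "%Y-%m-%d"). The rex idea, sketched in Python:

```python
import re

t = "2022-03-16 11:12:18.723"

# One capture for the date and one for the time; the milliseconds are
# deliberately left out of the Time capture, matching the desired output.
m = re.match(r"(?P<Date>\d{4}-\d{2}-\d{2})\s(?P<Time>\d{2}:\d{2}:\d{2})", t)
print(m.group("Date"))  # 2022-03-16
print(m.group("Time"))  # 11:12:18
```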
Hello, I use a complex search that displays results ordered by time in a table. As you can see, the time period is today between 07:00 and 19:00.

| appendcols
    [ search `index` type=* earliest=@d+7h latest=@d+19h
    | search web_domain=sharepoint.com
    | search web_duration_ms > 7000
    | stats count as PbPerf by sam _time
    | timechart span=1h dc(sam) as "SHAREPOINT - Nb d'utilisateurs ayant un temps de réponse > 7 sec" ]
| appendcols
    [ search `index` type=* earliest=@d+7h latest=@d+19h
    | search web_domain=laposte.sharepoint.com
    | timechart span=1h count as "SHAREPOINT - Nb d'erreurs" ]
| where _time < now()
| eval time=strftime(_time,"%H:%M")
| sort time
| fields - _time _span _origtime
| transpose 0 header_field=time column_name=KPI
| fillnull value=0
| sort + KPI

The results are displayed as shown. In the KPI field I have 10 different items. The problem is that when I run the dashboard at 07:00, only one or two items are displayed, with no results in the 07:00 span, and worse, the column corresponding to 07:00 is not displayed at all! Items only start to be displayed once there is a result > 0, and in that case the "7h" column appears.

What I need is that when I launch the dashboard at 07:00, even before 08:00, all the items in the KPI column are displayed, along with the "7h" column, showing 0 where there are no results (or the actual values where there are). Could you help me with this complex need, please?
Hi, I would like to get the average of multiple fields in the same row, but not all of them. Would anyone be able to advise on this?

query | chart latest(time_taken) by process server

Desired result:

Process  Local-1  Local-2  Avg(Local)  Remote-1  Remote-2
A        1        2        1.5         2         2
B        1        3        2           3         3

I would like to add an Avg(Local) field which gives me the average time taken by the processes running on Local-1 and Local-2. Appreciate any suggestions, thanks!
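The row-wise intent (average only the Local-* columns) can be expressed in SPL with something like eval "Avg(Local)"=('Local-1'+'Local-2')/2, single-quoting the hyphenated field names; the same idea, sketched in Python with made-up rows:

```python
# Row-wise average over just the Local-* columns, assuming each result row
# is a dict keyed by column name (hypothetical data mirroring the table).
rows = [
    {"Process": "A", "Local-1": 1, "Local-2": 2, "Remote-1": 2, "Remote-2": 2},
    {"Process": "B", "Local-1": 1, "Local-2": 3, "Remote-1": 3, "Remote-2": 3},
]

for row in rows:
    # Select only the Local-* columns for the average.
    local = [v for k, v in row.items() if k.startswith("Local-")]
    row["Avg(Local)"] = sum(local) / len(local)

print([r["Avg(Local)"] for r in rows])  # [1.5, 2.0]
```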