All Topics

How do I resolve the repetitive alert RSA_Probe_Alert_RSA_SECUREID_null? Splunk checks every minute for events containing the keyword "svc_radius_probe_ctx", and when no events with that keyword are found for a given minute, the alert is triggered. All the VMs and servers are working fine, yet we get this alert at least once a week.
Hi all, I'm monitoring compliance data for the past 7 days using timechart. My current query displays the count of "comply" and "not comply" events for each day:

index=indexA | timechart span=1d count by audit

However, I'd like to visualize this data as percentages instead. Is it possible to modify the search to display the percentage of compliant and non-compliant events on top of each bar? Thanks in advance for your help!
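One way this is commonly done (a sketch, untested against your data; it assumes the audit field has exactly the two values comply and not comply):

```spl
index=indexA
| timechart span=1d count by audit
| eval total='comply' + 'not comply'
| eval comply_pct=round('comply' / total * 100, 1)
| eval not_comply_pct=round('not comply' / total * 100, 1)
| fields _time comply_pct not_comply_pct
```

Whether the percentages render "on top of each bar" depends on the visualization; a chart overlay is the usual workaround.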
I'm looking to craft a query (a correlation search) that would trigger an alert when an internal system tries to access a malicious website. I would greatly appreciate any suggestions you may have. Thank you in advance for your help. Source=bluecoat
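A hedged sketch of one common approach, assuming you maintain a threat-intel lookup (here a hypothetical malicious_domains lookup with a domain column) and that the Bluecoat events carry src_ip and dest_host fields:

```spl
source=bluecoat
| lookup malicious_domains domain AS dest_host OUTPUT domain AS matched_domain
| where isnotnull(matched_domain)
| stats count earliest(_time) AS first_seen latest(_time) AS last_seen BY src_ip dest_host
```

Saved as a correlation search, the trigger condition would simply be "number of results > 0". The lookup and field names above are assumptions; adjust them to your CIM mappings.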
Since we are in the early stages of using Splunk Cloud, we don't define props.conf as part of the onboarding process; we only introduce props for a sourcetype when its parsing goes wrong. For one such sourcetype, line-breaking is off, but when I look for the sourcetype's props on the indexers (via support) and on the on-prem HF where the data is ingested, I cannot find them. So I wonder: is it possible to have no props defined anywhere for a particular sourcetype?
Hello Splunkers!! Below is a sample event; I want to extract some fields in Splunk while indexing. I have used the props.conf below to extract the fields, but nothing is showing up under interesting fields in Splunk. I have also attached a screenshot of the Splunk UI results. Please guide me on what I need to change in the settings.

[demo]
KEEP_EMPTY_VALS = false
KV_MODE = xml
LINE_BREAKER = <\/eqtext:EquipmentEvent>()
MAX_TIMESTAMP_LOOKAHEAD = 24
NO_BINARY_CHECK = true
SEDCMD-first = s/^.*<eqtext:EquipmentEvent/<eqtext:EquipmentEvent/g
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3f%Z
TIME_PREFIX = ((?<!ReceiverFmInstanceName>))<eqtext:EventTime>
TRUNCATE = 100000000
category = Custom
disabled = false
pulldown_type = true
FIELDALIAS-fields_scada_xml = "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.AreaID" AS area "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.ElementID" AS element "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.EquipmentID" AS equipment "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.ZoneID" AS zone "eqtext:EquipmentEvent.eqtext:ID.eqtext:Description" AS description "eqtext:EquipmentEvent.eqtext:ID.eqtext:MIS_Address" AS mis_address "eqtext:EquipmentEvent.eqtext:Detail.State" AS state "eqtext:EquipmentEvent.eqtext:Detail.eqtext:EventTime" AS event_time "eqtext:EquipmentEvent.eqtext:Detail.eqtext:MsgNr" AS msg_nr "eqtext:EquipmentEvent.eqtext:Detail.eqtext:OperatorID" AS operator_id "eqtext:EquipmentEvent.eqtext:Detail.ErrorType" AS error_type "eqtext:EquipmentEvent.eqtext:Detail.Severity" AS severity

=================================

Sample event:

<eqtext:EquipmentEvent xmlns:eqtext="http://vanderlande.com/FM/EqtEvent/EqtEventExtTypes/V1/1/5" xmlns:sbt="http://vanderlande.com/FM/Common/Services/ServicesBaseTypes/V1/8/4" 
xmlns:eqtexo="http://vanderlande.com/FM/EqtEvent/EqtEventExtOut/V1/1/5"><eqtext:ID><eqtext:Location><eqtext:PhysicalLocation><AreaID>8503</AreaID><ZoneID>3</ZoneID><EquipmentID>3</EquipmentID><ElementID>0</ElementID></eqtext:PhysicalLocation></eqtext:Location><eqtext:Description> LMS not healthy</eqtext:Description><eqtext:MIS_Address>0.3</eqtext:MIS_Address></eqtext:ID><eqtext:Detail><State>WENT_OUT</State><eqtext:EventTime>2024-04-02T21:09:38.337Z</eqtext:EventTime><eqtext:MsgNr>4657614997395580315</eqtext:MsgNr><Severity>LOW</Severity><eqtext:OperatorID>WALVAU-SCADA-1</eqtext:OperatorID><ErrorType>TECHNICAL</ErrorType></eqtext:Detail></eqtext:EquipmentEvent>    
Hi, I have this search, for example:

index=test elb_status_code=200 | timechart count as total span=1s | stats count as num_seconds by total | sort total

When I search over 1 or 2 days, my results include totals of 0, 1, 2, 3, etc. When I go above that, 3 days for example, I lose all the data about the 0 value and my results start with 1, 2, 3, etc. Can anyone explain this? Am I doing something wrong, or could this be a bug somewhere?
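One hedged workaround while investigating: force the time axis to be continuous and zero-fill it yourself, rather than relying on timechart's own zero-filling (which can stop once the number of 1-second buckets over the search window grows very large):

```spl
index=test elb_status_code=200
| timechart count AS total span=1s
| makecontinuous _time span=1s
| fillnull value=0 total
| stats count AS num_seconds BY total
| sort total
```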
Hi, I got an error while trying to get logs from the Kaspersky Console. I've done all the tasks to add it, such as configuring the port, IP, etc.

index="kcs" Type=Error Message="Cannot start sending events to the SIEM system. Functionality in limited mode. Area: System Management."
Hi experts, I'm seeking assistance with configuring Sysmon in inputs.conf on a Splunk Universal Forwarder. The configuration is based on the Splunk Technology Add-on (TA) for Sysmon:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = 1
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
index = sysmon

Is this the correct config?
I created an API test with Synthetics, but I can't set up a detector that checks whether 2 consecutive requests (2 in a row) are in error. Is there any way to configure the detector to raise an alarm if 2 requests in a row fail?
Hi Team, @ITWhisperer @gcusello I am sending CSV data to Splunk, testing on a dev Windows machine with a UF. This is the sample CSV data:

Subscription Name  Resource Group Name  Key Vault Name  Secret Name  Expiration Date  Months
SUB-dully  core-auto  core-auto  core-auto-cert  2022-07-28  -21
SUB-gully  core-auto  core-auto  core-auto-cert  2022-07-28  -21
SUB-pally  core-auto  core-auto  core-auto-cert  2022-09-01  -20

In the output I am getting, all rows land in a single event. I created inputs.conf and a sourcetype with the sourcetype configurations. Can anyone help me understand why it's not line-breaking?
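A minimal props.conf sketch of a sourcetype that usually breaks a headered CSV file into one event per row; my_csv is a hypothetical name, and note that structured-data (INDEXED_EXTRACTIONS) settings must be deployed on the UF itself, not only downstream:

```ini
[my_csv]
INDEXED_EXTRACTIONS = csv
SHOULD_LINEMERGE = false
HEADER_FIELD_LINE_NUMBER = 1
```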
I'm taking a Udemy Splunk introductory course, in the module about macros. The string works fine in Search, but not as a macro named fileinfo, where I get the above error.

index=web | eval megabytes=bytes/1024/1024 | stats sum(megabytes) as "Megs" by file | sort – Megs
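Side note, worth checking first: the `sort – Megs` above contains an en-dash (–) rather than a plain ASCII hyphen (-), a classic artifact of copying from slides or PDFs, and sort only understands the hyphen. A sketch of what the macro definition in macros.conf could look like (assuming the macro takes no arguments):

```ini
[fileinfo]
definition = index=web | eval megabytes=bytes/1024/1024 | stats sum(megabytes) as "Megs" by file | sort - Megs
iseval = 0
```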
Attempting to address an issue where some of my org's larger playbooks refuse to load in the SOAR playbook editor. Support, as usual, disappoints by throwing their hands up in the air, referencing "Best Practices" and demanding we reduce the size of our playbooks. When I ask them to back their position with documentation, there is none. We're finding ourselves having to disable automations and workflows simply because we can't even load these workflows in the editor to perform routine fixes. Even after escalating to our account team, we're still getting the "reduce the size of your playbooks" answer. Their workaround for not being able to load the playbooks in the current version to rewrite them is to rebuild a SOAR environment in 5.x so we can make these edits 🤬. Has anyone else experienced this? Is the only resolution rewriting the playbooks to break them up? Version 6.1. Attempted the newest release in a lab; no improvement.
I already have the Salesforce add-on for Splunk. Does Salesforce have an email source that I can tap into to get those emails? Has anyone done it successfully?
Hi All, I have one log, ABC, which comes from the sl-sfdc API, and another log, EFG, which comes from the sl-gcdm API. I want to see the properties and error-code fields that are present in the EFG log, but that source also has many other logs coming from different APIs. I only want the EFG log whose correlationId matches the one in ABC; only then should the other log be checked. After that I will use a regular expression, together with spath, to get the fields. Currently I am using this query:

index=whcrm ( sourcetype=xl-sfdcapi ("Create / Update Consents for gcid" OR "Failure while Create / Update Consents for gcid" OR "Create / Update Consents done") ) OR ( sourcetype=sl-gcdm-api ("Error in sync-consent-dataFlow:") )
| rename properties.correlationId as correlationId
| rex field=_raw "correlationId: (?<correlationId>[^\s]+)"
| eval is_success=if(match(_raw, "Create / Update Consents done"), 1, 0)
| eval is_failed=if(match(_raw, "Failure while Create / Update Consents for gcid"), 1, 0)
| eval is_error=if(match(_raw, "Error in sync-consent-dataFlow:"), 1, 0)
| stats sum(is_success) as Success_Count, sum(is_failed) as Failed_Count
| eval Total_Consents = Success_Count + Failed_Count
| table Total_Consents, Success_Count, Failed_Count

The first set of strings matches the ABC log and the second matches EFG. I also want to use this regular expression in between to get the details:

| rex field=message "(?<json_ext>\{[\w\W]*\})"
| spath input=json_ext

Or there may be another way to write the query and get the counts. Please help. Thanks in advance.
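One hedged way to keep only the EFG events whose correlationId also appears in an ABC event is to count distinct sourcetypes per correlationId before filtering; a sketch against the sourcetypes in the query above, untested against your data:

```spl
index=whcrm ( sourcetype=xl-sfdcapi ("Create / Update Consents done" OR "Failure while Create / Update Consents for gcid") ) OR ( sourcetype=sl-gcdm-api "Error in sync-consent-dataFlow:" )
| rex field=_raw "correlationId: (?<correlationId>\S+)"
| eval correlationId=coalesce(correlationId, 'properties.correlationId')
| eventstats dc(sourcetype) AS st_count BY correlationId
| where st_count > 1 OR sourcetype="xl-sfdcapi"
| rex field=message "(?<json_ext>\{[\w\W]*\})"
| spath input=json_ext
```

eventstats keeps every event but annotates each correlationId with how many sourcetypes it appeared in, so EFG events without a matching ABC event are dropped by the where clause.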
Hi All, the output of one of my Splunk queries has a time field containing a time range in this format:

TeamWorkTimings 09:00:00-18:00:00

I want the values stored in two fields, like:

TeamStart 09:00:00
TeamEnd 18:00:00

How do I achieve this using a regex or concat expression in Splunk? Please suggest.
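Either rex or split works here; a minimal sketch assuming the field is literally named TeamWorkTimings:

```spl
| rex field=TeamWorkTimings "^(?<TeamStart>[^-]+)-(?<TeamEnd>.+)$"
```

or equivalently with eval:

```spl
| eval TeamStart=mvindex(split(TeamWorkTimings, "-"), 0), TeamEnd=mvindex(split(TeamWorkTimings, "-"), 1)
```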
Hi Everyone, for some reason I'm getting a differently formatted CSV file when I download it than when it comes from the scheduled-report functionality.

- When I download the file from the Splunk search option, I get something like: {"timestamp: 2024-04-02T22:42:19.655Z sequence: 735 blablaclasname: com.rr.jj.eee.rrr anotherblablaclasnameName: com.rr.rr.rrrr.rrr level: ERRROR exceptionMessage: blablabc .... }
- When I receive the file by email, using the same query, I get something like: {"timestamp: 2024-04-02T22:42:19.655Z\nsequence: 735\nblablaclasname: com.rr.jj.eee.rrr\nanotherblablaclasnameName: com.rr.rr.rrrr.rrr\nlevel: ERRROR\n\nexceptionMessage: blablabc\n....}

In the *.conf file I see: LINE_BREAKER = \}(\,?[\r\n]+)\{?

Regards
I recently updated the apps on a dev search head and got this new error showing up in my _internal logs. I don't have any inputs configured currently in the add-on. Has anyone else seen this?

root@raz-spldevsh:/opt/splunk/etc/apps# tail -n5000 /opt/splunk/var/log/splunk/splunkd.log | grep -E "ERROR"
04-05-2024 11:26:08.663 +0000 ERROR ExecProcessor [690962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/conf_migration.py
04-05-2024 11:26:08.686 +0000 ERROR ExecProcessor [690962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/conf_migration.py
04-05-2024 11:26:08.699 +0000 ERROR ExecProcessor [690962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/conf_migration.py

Splunk 9.0.3, App version: 4.5.1
Hi guys, thanks in advance. I need to extract limited values from fields. Query:

index="mulesoft" applicationName="s-concur-api" environment=PRD priority timestamp
| search NOT message IN ("API: START: /v1/expense/extract/ondemand/accrual*")
| spath content.payload{}
| mvexpand content.payload{}
| stats values(content.SourceFileName) as SourceFileName values(content.JobName) as JobName values(content.loggerPayload.archiveFileName) as ArchivedFileName values(message) as message min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
| rex field=message max_match=0 "Expense Extract Process started for (?<FileName>[^\n]+)"
| rex field=message max_match=0 "API: START: /v1/expense/extract/ondemand/(?<OtherRegion>[^\/]+)\/(?<OnDemandFileName>\S+)"
| eval OtherRegion=upper(OtherRegion)
| eval OnDemandFileName=rtrim(OnDemandFileName,"Job")
| eval "FileName/JobName"=coalesce(OnDemandFileName,JobName)
| eval JobType=case(like('message',"%Concur Ondemand Started%"),"OnDemand",like('message',"%API: START: /v1/expense/extract/ondemand%"),"OnDemand",like('message',"Expense Extract Process started%"),"Scheduled")
| eval Status=case(like('message',"%Concur AP/GL File/s Process Status%"),"SUCCESS",like('tracePoint',"%EXCEPTION%"),"ERROR")
| eval Region=coalesce(Region,OtherRegion)
| eval OracleRequestId=mvappend("RequestId:",RequestID,"ImpConReqid:",ImpConReqId)
| eval Response=coalesce(message,error,errorMessage)
| eval StartTime=round(strptime(Logon_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Logoff_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")
| eval match=if(SourceFileDTLCount=TotalAPGLRecordsCountStaged,"Match","NotMatch")
| rename Logon_Time as Timestamp
| table Status JobType Response ArchivedFileName ElapsedTimeInSecs "Total Elapsed Time" correlationId
| fields - ElapsedTimeInSecs priority match
| where JobType!=" "
| search Status="*"
In the Response field I want to show only the following; I don't care about the rest:

PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 376 Company Code: 200 Operating Unit: US_AB_OU
PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 375 Company Code: 209 Operating Unit: US_AB_OU
PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 374 Company Code: 210 Operating Unit: US_AB_OU

At the moment a result row looks like this:

Status: Success
Response: API: START: /v1/expense/extract; After calling flow archive-ConcurExpenseFile-SubFlow; Before calling flow archive-ConcurExpenseFile-SubFlow; Calling s-ebs-api for AP Import process; Concur AP/GL File/s Process Status; Concur Ondemand Started; Expense Extract Processing Starts; Extract has no GL Lines to Import into Oracle; PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 376 Company Code: 200 Operating Unit: US_AB_OU; PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 375 Company Code: 209 Operating Unit: US_AB_OU; PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 374 Company Code: 210 Operating Unit: US_AB_OU; PRD(SUCCESS): Concur AP/GL File/s Process Status - APAC Records Count Validation Passed
ArchiveFileName: EMEA_concur_expenses_
correlationId: 49cde170-e057-11ee-8125-de5fb5
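Since values(message) produces a multivalue Response field, one hedged way to keep only the PRD(SUCCESS) entries is mvfilter, applied after the Response eval in the query above (a sketch; adjust the regex if the prefix varies):

```spl
| eval Response=mvfilter(match(Response, "^PRD\(SUCCESS\)"))
```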
Hello all,

SynApp: 3.0.3
OS: RHEL8 FIPS
Splunk 9.0.x

I configured this app and changed the index IPs in the local inputs.conf, but it isn't working. Obviously there are a lot of things that could be wrong, but I am really not seeing any app logging. Is there any way to configure that? Does this app have a FIPS incompatibility? The only things I am finding are these logs in splunkd.log:

ERROR ExecProcessor [1044046 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Synack/bin/assessment_data.py" obj, end = self.raw_decode(s, idx=_w(s, 0).end())
ERROR ExecProcessor [1044046 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Synack/bin/vuln_data.py" json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Based on a drop-down list value, how do I change the search string in each panel? E.g., for a panel to load, the search string will vary as below:

index=test (source="/test/log/path/test1.log" $param1$ c="$param2$" $dropdownlistvalue1$ $dropdownlistvalue1$)

As the log path is different, all my params vary. So how can I change the index based on the drop-down list value?
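The usual Simple XML pattern is to have the dropdown's change block set extra tokens that every panel then references; a sketch with hypothetical token and value names:

```xml
<input type="dropdown" token="app_choice">
  <label>Application</label>
  <choice value="app1">Application 1</choice>
  <choice value="app2">Application 2</choice>
  <change>
    <condition value="app1">
      <set token="idx">test</set>
      <set token="log_source">/test/log/path/test1.log</set>
    </condition>
    <condition value="app2">
      <set token="idx">test2</set>
      <set token="log_source">/test2/log/path/test2.log</set>
    </condition>
  </change>
</input>
```

Each panel search then uses the tokens instead of literals, e.g. index=$idx$ source="$log_source$" $param1$ c="$param2$".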