All Posts



Please find my answers in bold.

Do you need an alert if there has been a problem which has not been recovered within 15 minutes in your data, even if it was recovered after 16 minutes or later? If the PROBLEM alert is not RECOVERED after 15 minutes, we need to trigger a script.
Are you only interested in whether the last problem (without a recovery) was over 15 minutes ago? YES
Can you get multiple problems (without recovery) events for the same problem? Yes, I am running this on edge nodes, which are a limited set of hosts. It could be multiple hosts as well.
Does the 15 minutes start when the PROBLEM event for the latest PROBLEM first occurs? YES
Does the 15 minutes start when the PROBLEM event for the latest PROBLEM last occurs? NO
How far back are you looking for these events? Last 30 minutes
How often are you looking for these events? Every 15 minutes

Can you check the below snippet as well?
The error says it all. The wql parameter needs a valid WQL query to retrieve the data. Yours is not a proper WQL query. BTW, why are you using WMI? This is one of the worst ways of getting data from Windows.
Please clarify your requirements.
Do you need an alert if there has been a problem which has not been recovered within 15 minutes in your data, even if it was recovered after 16 minutes or later?
Are you only interested in whether the last problem (without a recovery) was over 15 minutes ago?
Can you get multiple problems (without recovery) events for the same problem?
Does the 15 minutes start when the PROBLEM event for the latest PROBLEM first occurs?
Does the 15 minutes start when the PROBLEM event for the latest PROBLEM last occurs?
How far back are you looking for these events?
How often are you looking for these events?
Or is Data Manager the only solution for this kind of input?
Hi, I have an S3 input issue; the bucket is not in an AWS Security Lake. Is it possible to use the Splunk Add-on for AWS to ingest an S3 bucket containing Parquet-formatted files?
Thanks for the response. Any reference link to achieve the same would be helpful.
Hi, apologies if I'm using the wrong terminology here. I'm trying to configure SC4S to override the destination indexes for certain types of sources. For example, if an event is received from a Cisco firewall, by default it'll end up in the 'netfw' index. Instead, I want all events that would have gone to 'netfw' to go to, for example, 'site1_netfw'.

I attempted to do this using the splunk_metadata.csv file, but I now understand I've misinterpreted the documentation. I had used 'netfw,index,site1_netfw', but if I understand correctly, I'd actually need a separate line for each key, such as 'cisco_asa,index,site1_netfw'. Is that correct? Is there a way to accomplish what I want without listing each source key? Thanks
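For illustration, a per-key splunk_metadata.csv override might look like the sketch below. The keys are assumptions for this example; the actual keys available depend on the source definitions shipped with your SC4S version:

```csv
cisco_asa,index,site1_netfw
cisco_meraki,index,site1_netfw
cisco_ios,index,site1_netfw
```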
Perfect, just to fast-track the process of getting service KPI ids we can use "service_kpi_lookup" to find kpi_id and directly search using that id in saved searches to spot KPI base search. | inputlookup service_kpi_lookup | search title="your_service_name"  
Hi, can you please let me know how we can combine the outputs of multiple searches into a single field? For example, we need a single output for the below 2 searches.

Search 1:

`macro_events_all_win_ops_esa` sourcetype=WinHostMon host=P9TWAEVV01STD (TERM(Esa_Invoice_Processor) OR TERM(Esa_Final_Demand_Processor) OR TERM(Esa_Initial_Listener_Service) OR TERM(Esa_MT535_Parser) OR TERM(Esa_MT540_Parser) OR TERM(Esa_MT542_Withdrawal_Request) OR TERM(Esa_MT544_Parser) OR TERM(Esa_MT546_Parser) OR TERM(Esa_MT548_Parser) OR TERM(Esa_SCM Batch_Execution) OR TERM(Euroclear_EVIS_Border_Internal) OR TERM(EVISExternalInterface)) | stats latest(State) as Current_Status by service | where Current_Status != "Running" | stats count as count_of_stopped_services | eval status = if(count_of_stopped_services = 0, "OK", "NOK") | table status

Search 2:

`macro_events_all_win_ops_esa` host="P9TWAEVV01STD" sourcetype=WinEventLog "Batch *Failed" System_Exception="*" | stats count as count_of_failed_batches | eval status = if(count_of_failed_batches = 0, "OK", "NOK") | table status

Output:
If the status for search 1 and the status for search 2 is OK, then the output should be OK.
If the status for search 1 or the status for search 2 is NOK, then the output should be NOK.
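A hedged sketch of one way to combine the two statuses (untested, reusing the macro, host, and field names from the searches above): run the second search in an append subsearch, collect both status values, and report NOK if either one is NOK.

```spl
`macro_events_all_win_ops_esa` sourcetype=WinHostMon host=P9TWAEVV01STD
| stats latest(State) as Current_Status by service
| where Current_Status != "Running"
| stats count as count_of_stopped_services
| eval status = if(count_of_stopped_services = 0, "OK", "NOK")
| append
    [ search `macro_events_all_win_ops_esa` host="P9TWAEVV01STD" sourcetype=WinEventLog "Batch *Failed" System_Exception="*"
      | stats count as count_of_failed_batches
      | eval status = if(count_of_failed_batches = 0, "OK", "NOK") ]
| stats values(status) as statuses
| eval status = if(isnotnull(mvfind(statuses, "NOK")), "NOK", "OK")
| table status
```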
Please find my answers in BOLD.
Do you need an alert if there has been a problem which has not been recovered within 15 minutes in your data, even if it was recovered after 16 minutes, or are you just interested in whether the last problem (without a recovery) was over 15 minutes ago? YES
Can you get multiple problems (without recovery) events for the same problem, i.e. do you need to know when the latest (or any) problem started (and whether it was fixed within 15 minutes)? CORRECT
{"Time":"2024-07-29T08:18:22.6471555Z","Level":"Info","Message":"Targeted Delivery","Domain":"NA","ClientDateTime":"2024-07-29T08:18:21.703Z","SecondsFromStartUp":2,"UserAgent":"Mozilla/5.0 (Linux; Android 9; Redmi Note 8 Pro Build/PPR1.180610.011; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/127.0.6533.64 Mobile Safari/537.36 ,"Metadata":{"Environment":"Production"}}
Hello, I am currently using Splunk UF 7.2 on a Windows Server, and my UF is installed on the D drive. I am getting the below error messages in splunkd.log:

07-29-2024 09:07:25.343 +0100 ERROR ExecProcessor - message from ""D:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Error occurred while trying to retrieve results from a WMI query (error="Query was not syntactically valid." HRESULT=80041017) (root\cimv2: Win32_Service | SELECT Name, Caption, State, Status, StartMode, StartName, PathName Description)
07-29-2024 09:07:25.343 +0100 ERROR ExecProcessor - message from ""D:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Error occurred while trying to retrieve results from a WMI query (error="Query was not syntactically valid." HRESULT=80041017) (root\cimv2: Win32_PerfFormattedData_PerfProc_Process | SELECT Name, PSComputerName, WorkingSetPrivate, IDProcess, PercentProcessorTime)"

$SPLUNK_HOME\etc\system\local\inputs.conf:

[default]
host = <hostname>
[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0

wmi.conf:

[settings]
initial_backoff = 5
max_backoff = 20
max_retries_at_max_backoff = 2
checkpoint_sync_interval = 2
[WMI:LocalProcesses]
interval = 20
wql = Win32_PerfFormattedData_PerfProc_Process | SELECT Name, PSComputerName, WorkingSetPrivate, IDProcess, PercentProcessorTime
disabled = 0
[WMI:Service]
interval = 86400
wql = Win32_Service | SELECT Name, Caption, State, Status, StartMode, StartName, PathName Description

Can someone please help? I am not using the Splunk Add-on for Windows.
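For reference, WQL uses standard SELECT ... FROM syntax rather than the class | SELECT form shown in the stanzas above. A hedged sketch of what syntactically valid stanzas might look like (untested; attribute lists taken from the stanzas above, with a comma assumed to be missing before Description):

```
[WMI:LocalProcesses]
interval = 20
wql = SELECT Name, PSComputerName, WorkingSetPrivate, IDProcess, PercentProcessorTime FROM Win32_PerfFormattedData_PerfProc_Process
disabled = 0

[WMI:Service]
interval = 86400
wql = SELECT Name, Caption, State, Status, StartMode, StartName, PathName, Description FROM Win32_Service
disabled = 0
```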
Do you need an alert if there has been a problem which has not been recovered within 15 minutes in your data, even if it was recovered after 16 minutes, or are you just interested in whether the last problem (without a recovery) was over 15 minutes ago?
Can you get multiple problems (without recovery) events for the same problem, i.e. do you need to know when the latest (or any) problem started (and whether it was fixed within 15 minutes)?
Hello @yuanliu,
Yes, but often I encounter events like this (just an example):

01/01/2014 11:10:38 AM LogName=Security EventCode=4625 EventType=0 ComputerName=TestY SourceName=Microsoft Windows security auditing. Type=Information RecordNumber=2746 Keywords=Échec de l'audit TaskCategory=Ouverture de session OpCode=Informations Message=Echec d'ouverture de session d'un compte. Sujet : ID de sécurité : S-0 Nom du compte : - Domaine du compte : - ID d'ouverture de session : 0x0 Type d'ouverture de session : 3 Compte pour lequel l'ouverture de session a échoué : ID de sécurité : S-0 Nom du compte : Albert Domaine du compte : -

When I try to display the logs in statistics, it shows one event with a user (-) and another event with a user (Albert), even though it is a single event. This happens because it extracts the account name in the "Subject" section and also in the "Logon Type" section. Regarding your question about the conversion to XML: no, I just modified the configuration by adding 'renderXml=1'.
Hi @sarit_s6 , please share a sample of your logs so I can show you how to set the indexname. Ciao. Giuseppe
In the event I have the name of the domain; that is the only key I can use. All of the logs are in one big index and I need to split it.
Hi @sarit_s6,
if you have different retention values for your events, you must use different indexes. Are the index names in the events or not? Could you share some samples of your logs?
Ciao.
Giuseppe
I will try to explain it from the start. I have one index that contains lots of data for many domains. We need to split this index so the logs for each domain are indexed to the relevant index (which already exists). The problem we have with keeping this large index is that we are saving the data for a long retention period, and not all of the domains need the data for the same amount of time.
Hi @sarit_s6,
if you want an index for each domain, you can choose the index name from the domain contained in the log, but, as I said, it isn't a good idea, also because you have to create the indexes before re-routing, and this action cannot be automatic! In addition, this way you'll have thousands of indexes. I'm repeating: it isn't a good idea.
Ciao.
Giuseppe
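For reference, choosing the index from a value in the event is typically done at index time with a props/transforms pair on the indexers (or heavy forwarders). A minimal sketch, assuming a pattern like domain=<name> in the raw event and a hypothetical sourcetype your_sourcetype; as noted above, every target index such as idx_<domain> must already exist:

```
# props.conf
[your_sourcetype]
TRANSFORMS-route_by_domain = route_domain_to_index

# transforms.conf
[route_domain_to_index]
REGEX = domain=(\w+)
DEST_KEY = _MetaData:Index
FORMAT = idx_$1
```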
index=imdc_nagios_hadoop sourcetype=icinga host=* "Load_per_CPU_core" "PROBLEM" OR "RECOVERY"
| fields host
| search "To: <mail-addr>"
| rex field=_raw "Host:(?<src_host_1>.*) - Service:(?<Service_1>.*) State:(?<State_1>.*)"
| rex field=_raw "Subject: (?<Subject>.*)"
| rex field=_raw "(?<Additional_Info>.*)\nTo:"
| eval Service= if(isnull(Service_1),Service_2,Service_1), src_host= if(isnull(src_host_1),src_host_2,src_host_1), State= if(isnull(State_1),State_2,State_1)
| eval event_type=if(match(_raw, "Subject: PROBLEM"), "PROBLEM", "RECOVERY")
| lookup hostdata_lookup.csv host as src_host
| table _time src_host Service State event_type cluster isvm
| search cluster=*edge* AND isvm=N
| sort src_host Service _time
| streamstats current=f window=1 last(_time) as previous_time last(event_type) as previous_event_type by src_host Service
| eval previous_time=strftime(previous_time, "%m/%d/%Y - %H:%M:%S")

Below is the output of the above query. If the CRITICAL alert is not RECOVERED after 15 minutes, we need to alert. Any help is appreciated.
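A hedged sketch of one way to detect PROBLEMs with no RECOVERY within 15 minutes, reusing the index, sourcetype, and field names from the query above (untested; scheduled every 15 minutes over the last 30 minutes, per the requirements stated earlier in the thread): keep the latest event per host/service, and alert where it is still a PROBLEM older than 900 seconds.

```spl
index=imdc_nagios_hadoop sourcetype=icinga "Load_per_CPU_core" ("PROBLEM" OR "RECOVERY")
| rex field=_raw "Host:(?<src_host>.*) - Service:(?<Service>.*) State:(?<State>.*)"
| eval event_type=if(match(_raw, "Subject: PROBLEM"), "PROBLEM", "RECOVERY")
| stats latest(event_type) as last_event latest(_time) as last_time by src_host Service
| where last_event="PROBLEM" AND now() - last_time > 900
```

If any row is returned, the alert (or the script action mentioned above) fires for that host/service pair.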