All Posts


Please find my answers in bold below.

Do you need an alert if there has been a problem which has not been recovered within 15 minutes in your data, even if it was recovered after 16 minutes, or are you just interested in whether the last problem (without a recovery) was over 15 minutes ago? YES

Can you get multiple problem (without recovery) events for the same problem, i.e. do you need to know when the latest (or any) problem started (and whether it was fixed within 15 minutes)? CORRECT
{"Time":"2024-07-29T08:18:22.6471555Z","Level":"Info","Message":"Targeted Delivery","Domain":"NA","ClientDateTime":"2024-07-29T08:18:21.703Z","SecondsFromStartUp":2,"UserAgent":"Mozilla/5.0 (Linux; A... See more...
{"Time":"2024-07-29T08:18:22.6471555Z","Level":"Info","Message":"Targeted Delivery","Domain":"NA","ClientDateTime":"2024-07-29T08:18:21.703Z","SecondsFromStartUp":2,"UserAgent":"Mozilla/5.0 (Linux; Android 9; Redmi Note 8 Pro Build/PPR1.180610.011; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/127.0.6533.64 Mobile Safari/537.36 ,"Metadata":{"Environment":"Production"}}
Hello, I am currently using Splunk UF 7.2 on a Windows Server, and my UF is installed on the D drive. I am getting the below error messages in splunkd.log:

07-29-2024 09:07:25.343 +0100 ERROR ExecProcessor - message from ""D:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Error occurred while trying to retrieve results from a WMI query (error="Query was not syntactically valid." HRESULT=80041017) (root\cimv2: Win32_Service | SELECT Name, Caption, State, Status, StartMode, StartName, PathName Description)
07-29-2024 09:07:25.343 +0100 ERROR ExecProcessor - message from ""D:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Error occurred while trying to retrieve results from a WMI query (error="Query was not syntactically valid." HRESULT=80041017) (root\cimv2: Win32_PerfFormattedData_PerfProc_Process | SELECT Name, PSComputerName, WorkingSetPrivate, IDProcess, PercentProcessorTime)

$SPLUNK_HOME\etc\system\local\inputs.conf:

[default]
host = <hostname>

[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0

wmi.conf:

[settings]
initial_backoff = 5
max_backoff = 20
max_retries_at_max_backoff = 2
checkpoint_sync_interval = 2

[WMI:LocalProcesses]
interval = 20
wql = Win32_PerfFormattedData_PerfProc_Process | SELECT Name, PSComputerName, WorkingSetPrivate, IDProcess, PercentProcessorTime
disabled = 0

[WMI:Service]
interval = 86400
wql = Win32_Service | SELECT Name, Caption, State, Status, StartMode, StartName, PathName Description

Can someone please help? I am not using the Splunk Add-on for Windows.
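For illustration only (this is not part of the original post): the errors say "Query was not syntactically valid", and the wql values above are not written as standard WQL. WQL expects a SELECT ... FROM <class> statement, and the Win32_Service field list is missing a comma between PathName and Description. A minimal sketch of what the stanzas might look like under that assumption (the field lists are kept as in the post; whether every listed property exists on the class, e.g. PSComputerName, is itself an assumption to verify):

# sketch only: rewritten in SELECT ... FROM form; verify property names against the WMI classes
[WMI:LocalProcesses]
interval = 20
wql = SELECT Name, PSComputerName, WorkingSetPrivate, IDProcess, PercentProcessorTime FROM Win32_PerfFormattedData_PerfProc_Process
disabled = 0

[WMI:Service]
interval = 86400
wql = SELECT Name, Caption, State, Status, StartMode, StartName, PathName, Description FROM Win32_Service
disabled = 0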
Do you need an alert if there has been a problem which has not been recovered within 15 minutes in your data, even if it was recovered after 16 minutes, or are you just interested in whether the last problem (without a recovery) was over 15 minutes ago?

Can you get multiple problem (without recovery) events for the same problem, i.e. do you need to know when the latest (or any) problem started (and whether it was fixed within 15 minutes)?
Hello @yuanliu, Yes, but often I encounter events like this (just an example):

01/01/2014 11:10:38 AM
LogName=Security
EventCode=4625
EventType=0
ComputerName=TestY
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=2746
Keywords=Échec de l’audit
TaskCategory=Ouverture de session
OpCode=Informations
Message=Echec d'ouverture de session d'un compte.
Sujet :
    ID de sécurité : S-0
    Nom du compte : -
    Domaine du compte : -
    ID d’ouverture de session : 0x0
Type d’ouverture de session : 3
Compte pour lequel l’ouverture de session a échoué :
    ID de sécurité : S-0
    Nom du compte : Albert
    Domaine du compte : -

When I try to display the logs in statistics, it shows one event with the user (-) and another event with the user (Albert), even though it is a single event. This happens because it extracts the account name in the "Subject" section and also in the "Logon Type" section. Regarding your question about converting to XML: no, I just modified the configuration by adding 'renderXml=1'.
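For context (not from the original post), renderXml is an inputs.conf setting on the Windows event log input; with it enabled the forwarder sends the event as XML instead of the classic multi-line text shown above. A minimal sketch, assuming the Security channel stanza (the stanza name is an assumption):

[WinEventLog://Security]
disabled = 0
# sketch: emit events as XML rather than the classic text rendering; can also be written as renderXml = true
renderXml = 1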
Hi @sarit_s6 , please share a sample of your logs so I can show you how to set the index name. Ciao. Giuseppe
In the event I have the name of the domain; that is the only key I can use. All of the logs are in one big index and I need to split it.
Hi @sarit_s6 , if you have different retention values for your events, you must use different indexes. Are the index names in the events or not? Could you share some samples of your logs? Ciao. Giuseppe
I will try to explain it from the start. I have one index that contains lots of data for many domains. We need to split this index so that the logs for each domain are indexed into the relevant index (which already exists). The problem with keeping this large index is that we are saving the data for a long retention period, and not all of the domains need this data for the same amount of time.
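For illustration only (not part of the original post): the per-domain retention that motivates the split is configured per index in indexes.conf, so each index can age out data independently. A minimal sketch with hypothetical index names and retention values:

# sketch only: index names and retention periods are hypothetical
[domain_a]
homePath = $SPLUNK_DB/domain_a/db
coldPath = $SPLUNK_DB/domain_a/colddb
thawedPath = $SPLUNK_DB/domain_a/thaweddb
# roughly 90 days, in seconds
frozenTimePeriodInSecs = 7776000

[domain_b]
homePath = $SPLUNK_DB/domain_b/db
coldPath = $SPLUNK_DB/domain_b/colddb
thawedPath = $SPLUNK_DB/domain_b/thaweddb
# roughly 365 days, in seconds
frozenTimePeriodInSecs = 31536000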
Hi @sarit_s6 , if you want an index for each domain, you can choose the index name from the domain contained in the log but, as I said, it isn't a good idea, also because you have to create the indexes before re-routing and that action cannot be automatic! In addition, in this way you'll have thousands of indexes. I'm repeating: it isn't a good idea. Ciao. Giuseppe
index=imdc_nagios_hadoop sourcetype=icinga host=* "Load_per_CPU_core" "PROBLEM" OR "RECOVERY"
| fields host
| search "To: <mail-addr>"
| rex field=_raw "Host:(?<src_host_1>.*) - Service:(?<Service_1>.*) State:(?<State_1>.*)"
| rex field=_raw "Subject: (?<Subject>.*)"
| rex field=_raw "(?<Additional_Info>.*)\nTo:"
| eval Service=if(isnull(Service_1),Service_2,Service_1), src_host=if(isnull(src_host_1),src_host_2,src_host_1), State=if(isnull(State_1),State_2,State_1)
| eval event_type=if(match(_raw, "Subject: PROBLEM"), "PROBLEM", "RECOVERY")
| lookup hostdata_lookup.csv host as src_host
| table _time src_host Service State event_type cluster isvm
| search cluster=*edge* AND isvm=N
| sort src_host Service _time
| streamstats current=f window=1 last(_time) as previous_time last(event_type) as previous_event_type by src_host Service
| eval previous_time=strftime(previous_time, "%m/%d/%Y - %H:%M:%S")

Below is the output of the above query (screenshot not reproduced here).

If the CRITICAL alert is not RECOVERED after 15 minutes, we need to alert. Any help is appreciated.
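For illustration only (not from the original thread): one narrow reading of the requirement is to flag host/service pairs whose most recent PROBLEM is more than 15 minutes old with no later RECOVERY. A minimal sketch under that assumption, reusing the index, sourcetype, and field names from the search above:

index=imdc_nagios_hadoop sourcetype=icinga "Load_per_CPU_core" ("PROBLEM" OR "RECOVERY")
| rex field=_raw "Host:(?<src_host>.*) - Service:(?<Service>.*) State:(?<State>.*)"
| eval event_type=if(match(_raw, "Subject: PROBLEM"), "PROBLEM", "RECOVERY")
| stats latest(eval(if(event_type="PROBLEM", _time, null()))) as last_problem latest(eval(if(event_type="RECOVERY", _time, null()))) as last_recovery by src_host Service
| where isnotnull(last_problem) AND (isnull(last_recovery) OR last_recovery < last_problem) AND now() - last_problem > 900

Run as a scheduled alert, this only catches problems still open past the 15-minute mark; it does not retroactively report problems that were eventually recovered after more than 15 minutes, which would need the event-pairing approach discussed earlier in the thread.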
The regex is just an example; it's not the real one, since the regex is not the issue here. The purpose of this step is that we need to separate the logs per domain. So my question is whether the props.conf example is the right way, or whether there is a different way to do it?
Each index is for a different domain; we want to split the logs per domain.
I know this post is super old, but just for the sake of having another possible solution written down somewhere, the following has solved it for me (based on what was discussed in this thread): keep the sourcetype in the universal forwarder's app props.conf with INDEXED_EXTRACTIONS = json:

[HurricaneMTA_Advanced]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
pulldown_type = true
TIMESTAMP_FIELDS = Timestamp
TIME_FORMAT = %FT%T.%7N%:z
SHOULD_LINEMERGE = true
KV_MODE = none
disabled = false

Then add the sourcetype in the props.conf of some app on the search head with KV_MODE set to none:

[HurricaneMTA_Advanced]
KV_MODE = none
disabled = false
Hi @sarit_s6 , as @PickleRick and @marnall also said, the only reasons to have different indexes are different retention periods and access grants, even if you have a big index: size isn't an issue for indexes. Remember that Splunk isn't a database and that indexes aren't tables! Even following your bad idea (bad because you need to create and manage many indexes without any apparent reason), it's possible to dynamically assign the index name by extracting it from the logs. In addition, your regex is very heavy for your system (you have many .* groups in it, one of them at the beginning), so you're putting a completely needless load on your system. You can check the performance of your regex on regex101.com. Ciao. Giuseppe
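As an illustration of the dynamic index assignment mentioned above (not from the original posts; the sourcetype, domain regex, and index naming are all hypothetical), index routing at parse time is usually done with a props.conf/transforms.conf pair on the indexers or heavy forwarders:

props.conf:

[my_sourcetype]
TRANSFORMS-route_by_domain = route_by_domain

transforms.conf:

# sketch only: the regex and the idx_ prefix are assumptions; adjust to the real event format
[route_by_domain]
SOURCE_KEY = _raw
REGEX = Domain=(\w+)
DEST_KEY = _MetaData:Index
FORMAT = idx_$1

The target indexes must already exist before events are routed to them, which is the manual step Giuseppe warns about; events whose domain has no matching index may be dropped or land in a last-chance index, depending on the Splunk version and configuration.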
This long dispatch phase means that it is taking very long for Splunk to spawn the search on your indexer. At first glance it would suggest network problems (are both components on prem or in the cloud? If in the cloud, are they in the same cloud zone?) or some DNS issue (so that timeouts must be happening).
This issue first occurred intermittently after upgrading Splunk Enterprise from 9.0.5 to 9.2.1 on a Linux kernel. An in-place upgrade from 9.2.1 to 9.2.2 didn't fix the issue either, but commenting out this line as such fixed it:

#CFUNCTYPE(c_int)(lambda: None)

I also tested on another box with the same issue: I commented out the line first and then upgraded from 9.2.1 to 9.2.2; the upgrade overrode the fix, and I had to re-apply it again. Definitely raising a Splunk support case for this one. Thank you!
Hi @Arsenii.Shub , Thank you for posting on community. I saw you already raised a support case for this, so I would like to share the solution, the results of my experimentation, and some additional information.

Issue: The URLs shown in BT/Transaction Snapshots are incomplete.

Goal: Differentiate slow search requests in the system caused by specific user input.

Tests: I tested the URL behavior on a .NET MVC web app.

Solutions:
URL display in the URL column: While it is not possible to show the full URL with http://host/, we can display the URL as /Search/userInput. Reference: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/configure-instrumentation/transaction-detection-rules/custom-match-rules/net-business-transaction-detection/name-mvc-transactions-by-area-controller-and-action#id-.NameMVCTransactionsbyArea,Controller,andActionv23.1-MVCTransactionNaming
Complete URL display in the BT name column: It is possible to display the complete URL https://host/Search/userInput in the BT name. Reference: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/configure-instrumentation/transaction-detection-rules/uri-based-entry-points

Next Steps:

For the partial URL in the URL column (/Search/userInput):
1. Add an App Server Agent Configuration and set the following .NET Agent Configuration properties to false: aspdotnet-core-naming-controlleraction and aspdotnet-core-naming-controllerarea.
2. Restart the AppDynamics.Agent.Coordinator_service and IIS, in that sequence. After that, apply load and check the BT/Snapshot if necessary.

For the complete URL in the BT name (https://host/Search/userInput):
1. Navigate to Configuration > Instrumentation > Transaction Detection in your application.
2. Add a new rule: choose Include, the proper agent type, and the current entry point. Fill in the Name field (it will be shown on your BT). Set the priority higher than the default automatic detection so it takes precedence.
3. Rule configuration: matching condition "URL is not empty", and Custom Expression: ${HttpRequest.Scheme}://${HttpRequest.Host}${HttpRequest.Path}${HttpRequest.QueryString}
4. Restart the AppDynamics.Agent.Coordinator_service and IIS, in that sequence. After that, apply load and check the BT/Snapshot if necessary.

Final Result: (screenshot not reproduced here)

Additional Information: You can also add the custom expression by modifying the default auto-detection rule instead of adding a new one as I did in the steps above; the result from modifying the default auto-detection rule is shown in a screenshot not reproduced here.
Another technique you can use is to make use of a TERM(xx) search - TERM() searches are much faster than raw data searches. Let's assume your uri is /partner/a/b/c/d; you can do

index=tomcat TERM(a) TERM(b) TERM(c) TERM(d) uri=/partner/a/b/c/d

It will depend on how unique the terms are, but it will certainly provide a way to reduce the amount of data looked at. In the job properties, look at the scanCount property, which will show you the number of events scanned to provide the results.
Is the search slow to return just the last 60 minutes of data, and does the performance degrade linearly as you increase the time interval? How many events do you get per 24-hour period? Are you just doing a raw event search for 7 days to demonstrate the problem, or is this part of your use case? Take a look at the job properties phase_0 property to see what your expanded search is. You can also look at the monitoring console to see what the Splunk server metrics look like - perhaps there is a memory issue - take a look at the resource usage dashboards.