Have you seen the Admin's Little Helper app (https://splunkbase.splunk.com/app/6368)? It includes a btool command that lets you see your configurations on both the SH and the indexers using SPL. While many configurables can be loaded safely on the SH, the indexer, or both, others cannot. Inputs and outputs are good examples; clustering settings are another.
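For comparison, if you have shell access, the standard CLI form of the same check is below (a minimal sketch; inputs.conf is just an illustrative choice of config file):

splunk btool inputs list --debug

The --debug flag prints the file each setting was merged from, which is usually what you want when comparing SH and indexer configurations.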
What is your question?
I have created two queries: the first is for the correct outage window, and the second uses a random date to see if the alert is triggered when one of the servers goes down. Both have the same trigger condition set: | where is_maintenance_window=0 AND is_server_down=1
When you're testing, just keep in mind that this is the time from the log event:
| eval current_time=_time
while this is the current time now, when the alert is running:
| eval current_time=now()
So, depending upon your lookback period (earliest= latest=), you might be picking up log events outside (prior to or after) your outage window start/end time. But if you don't want any alerts during the outage window, now() should be the correct time to use in your triggering conditions.
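To make that concrete, here is a minimal sketch of a trigger search evaluated against wall-clock time; the hardcoded window timestamps and the way is_maintenance_window is derived are placeholders, and is_server_down is assumed to come from your existing logic:

| eval current_time=now()
| eval window_start=strptime("2024-04-05 01:00", "%Y-%m-%d %H:%M")
| eval window_end=strptime("2024-04-05 03:00", "%Y-%m-%d %H:%M")
| eval is_maintenance_window=if(current_time>=window_start AND current_time<=window_end, 1, 0)
| where is_maintenance_window=0 AND is_server_down=1

Because now() is evaluated when the alert runs, the suppression no longer depends on when the log events themselves were generated.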
REPORT-url_domain: it's the name of the field you want to assign the result to.
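As a rough sketch of how that pairs up across the .conf files (the sourcetype name, stanza name, and regex here are hypothetical, not taken from the actual Zscaler add-on):

props.conf:
[zscaler:web]
REPORT-url_domain = extract_url_domain

transforms.conf:
[extract_url_domain]
REGEX = https?://(?<url_domain>[^/\s:]+)

The REPORT- setting in props.conf points at a transforms.conf stanza, and the named capture group in the REGEX is what actually creates the url_domain field at search time.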
If you use loadjob, it always loads an existing, previously run job. If you run | savedsearch ... then it will run a new search. If that new search returns the wrong results, then it would seem likely that the search itself has not changed.
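For reference, the two invocations look like this (a minimal sketch; "My Report" and the admin:search namespace are placeholder values):

| loadjob savedsearch="admin:search:My Report"

| savedsearch "My Report"

The first re-reads the result artifacts of the most recent run of the scheduled search; the second dispatches the saved search definition fresh.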
OK, I'm unsure where the time will get extracted, but have you looked at this document? https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/EdgeProcessor/TimeExtractionPipeline
Hi @KendallW, The error is "Invalid username or password." However, I am able to connect to the same database from other applications with the username and password from the Identity, and that is what I am using in the JDBC URL for access.
Do you have an example of what the props.conf would look like with that rule? I've tried several variations but it still isn't picked up.
I did what you explained to me but it still doesn't work; when I check the Zscaler logs, the url_domain field still does not appear. It is important to mention that I am implementing this from a custom app for Zscaler.
@sjringo - This is the result when the servers are taking traffic. I am going to test it tonight when the servers go down, to check that the alert is triggered outside the window and is not triggered during the window. In both cases at least one server is down.
Example rex:
|rex ".*\"LastmodifiedBy\":\s\"(?<LastmodifiedBy>[^\"]+)\""
|rex ".*\"ModifiedDate\":\s\"(?<ModifiedDate>[^\"]+)\""
|rex ".*\"ComponentName\":\s\"(?<ComponentName>[^\"]+)\""
|rex ".*\"RecordId\":\s\"(?<RecordId>[^\"]+)\""
Thanks, it looks like it contains a successful response; can we exclude it?

Publish message on SQS, queueName=xxx, retryCount=0, message={"traceId":"xxxtraceId","clientContext":"xxxclientContext","cardTokenReferenceId":"xxxCardTokenReferenceId","eventSource":"bulkDelete","walletWebResponse":{"clientContext":"xxxclientContext","ewSID":"xxxSID,"timestampISO8601":"2024-04-05T00:00:14Z","statusCode":"0","statusText":"Success"}}
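One way to drop those (a sketch; it assumes the message= JSON is extracted as in the earlier answer, and that successful events carry walletWebResponse.statusText="Success" as in your sample):

index=xxx sourcetype=xxx "Publish message on SQS" bulkDelete
| rex field=_raw "message=(?<message>{.*}$)"
| spath input=message
| where isnull('walletWebResponse.statusText') OR 'walletWebResponse.statusText'!="Success"

Events whose statusText is "Success" get filtered out, while the error events (which have an errors array instead of walletWebResponse) are kept by the isnull branch.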
Thanks for the quick reply! One correction to something I said earlier: the format of the "Date" in my lookup file is YYYY-MM-DD. It is in the same dashboard.

I tried what you had mentioned already, but with the global parameters within quotes. That didn't seem to return what I wanted, but it did not lead to an error. Then I tried without quotes, and I get this error:

Error in 'where' command: The operator at 'mon@mon AND Date<=@mon ' is invalid.

The where clause is like:

where customer="XYZ" AND Date>=$global_time.earliest$" AND Date<=$global_time.latest$"

I've also tried this:

| inputlookup mylookup.csv
| eval lookupfiledatestart=strftime($global_time.earliest$, "%Y-%m-%d")
| eval lookupfiledateend=strftime($global_time.latest$, "%Y-%m-%d")
| where client="XYZ" AND Date>=lookupfiledatestart AND Date<=lookupfiledateend

That gives me this error:

Error in 'EvalCommand': The expression is malformed. An unexpected character is reached at '@mon, "%Y-%m-%d")'.
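A pattern that sidesteps the token expansion entirely (a sketch, assuming the panel is bound to the global_time picker; addinfo exposes the search's effective time range as info_min_time/info_max_time):

| inputlookup mylookup.csv
| addinfo
| eval Date_epoch=strptime(Date, "%Y-%m-%d")
| where customer="XYZ" AND Date_epoch>=info_min_time AND (info_max_time="+Infinity" OR Date_epoch<=info_max_time)
| fields - Date_epoch info_*

Because picker values like -1mon@mon never appear in the SPL as literal text, the 'mon@mon ... is invalid' class of error goes away; the "+Infinity" guard handles an all-time range.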
It doesn't appear that these get logged, since the bulletin board does not log these into an index, but they are accessible via REST:

| rest /services/admin/messages splunk_server=local

More details found here: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/managebulletins/
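If you want a quick triage view, something like this should work (a sketch; the exact field names returned by the endpoint can vary by Splunk version, so check the raw output first):

| rest /services/admin/messages splunk_server=local
| table title severity message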
You could try extracting the json object after message=, then spathing it until you get the fields you would like. E.g.

index = xxx sourcetype=xxx "Publish message on SQS" bulkDelete
| rex field=_raw "message=(?<message>{.*}$)"
| spath input=message
| spath input=errors{}.errorDetails
| table eventSource statusCode statusText
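Since the original goal was to count how many of each error occur, swapping the final table for a stats (same placeholder index/sourcetype assumptions as above) gives the categorized view directly:

index = xxx sourcetype=xxx "Publish message on SQS" bulkDelete
| rex field=_raw "message=(?<message>{.*}$)"
| spath input=message
| spath input=errors{}.errorDetails
| stats count by eventSource statusCode statusText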
Just to add slightly to yuanliu's answer: you can use the "| addcoltotals" command if you would like to add another row containing the totals of the numerical columns. You'll have to convert the ones that contain non-numerical characters, like the "(0%)" part.
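For instance (a sketch with hypothetical field names; count_display is assumed to hold values like "42 (0%)"):

| eval count=tonumber(replace(count_display, "\s*\(\d+%\)$", ""))
| addcoltotals count labelfield=customer label="Total"

The replace strips the "(0%)" suffix so tonumber can parse the value, and addcoltotals appends a row whose customer column reads Total.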
In our log, I'd like to extract statusText and categorize it in a table to see how many error responses there are by statusCode and statusText. For example:

eventSource statusCode statusText
bulkDelete   1020       3031: No Card found with the identifier for the request

But my query is getting "has exceeded configured match_limit, consider raising the value in limits.conf." after using field extraction.

index = xxx sourcetype=xxx "Publish message on SQS"
| search bulkDelete
| rex field=_raw "(?ms)^(?:[^:\\n]*:){7}\"(?P<error_bulkDelete>[^\"]+)(?:[^:\\n]*:){2}\"(?P<error_errorCode>[^\"]+)[^:\\n]*:\"(?P<error_desc>[^\"]+)(?:[^:\\n]*:){6}\\\\\"(?P<error_statusText>[^\\\\]+)" offset_field=_extracted_fields_bounds

Target log:

Publish message on SQS, queueName=xxx, retryCount=0, message={"traceId":"xxx1112233","clientContext":"xxxxxclientContext","cardTokenReferenceId":"xxxcardTokenReferenceId","eventSource":"bulkDelete","errors":[{"errorCode":"52099","errorDescription":"Feign Client Exception.","retryCategory":"RETRYABLE","errorDetails":"{\"clientContext\":\"xxxxxclientContext\",\"ewSID\":\"xxxxSID\",\"statusCode\":\"1020\",\"statusText\":\"3031: No Card found with the identifier for the request\",\"timestampISO8601\":\"2024-04-05T00:00:26Z\"}"}]}

I checked similar posts; they suggested using a non-greedy match, so I tried:

index = "xxx" sourcetype=xxx "Publish message on SQS*" bulkDelete
| rex field=_raw "\"statusText\":\s*\"(?P<statusText>[^\"]+)\""
| where NOT LIKE( statusText, "%Success%")

If I add "| table", I get blank content in statusText.
We would like to be able to configure the Okta application to be of an "API Services" application type vs a "Web Application" type when setting up the "Splunk Add-on for Okta Identity Cloud" TA for OAuth2. When using a "Web Application" type, a user account must be associated with the auth flow. This ties the auth to a specific user, so if that user is suspended or disabled, the TA stops working. Ideally the auth would not be tied to a user but to an "API Services" application type, which Okta recommends for machine-to-machine auth. Are there plans to support this in the add-on going forward, since the "Web Application" type is less robust and not what Okta recommends?
Finally figured out it was a permission issue. I didn't give the splunk user ownership of the index locations.
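For anyone hitting the same thing, the fix looks something like this (a sketch; the index path and the splunk:splunk user/group are assumptions that depend on your install):

chown -R splunk:splunk /opt/splunk/var/lib/splunk/myindex

Run it against whatever homePath/coldPath you configured for the index, then restart Splunk.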