All Posts

If you use loadjob, it always loads an existing, previously run job. If you run | savedsearch ..., then it runs a new search. If that new search returns the wrong results, then it seems likely that the saved search definition has not changed.
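To illustrate the difference, here is a minimal sketch, assuming a saved search called "My Saved Search" owned by admin in the search app (both names are placeholders):

| loadjob savedsearch="admin:search:My Saved Search" ``` reuses the results of the most recent completed run of the saved search ```

| savedsearch "My Saved Search" ``` dispatches a brand-new run of the saved search definition ```

If loadjob keeps returning stale results, it is the saved search (or its scheduled run) that has to be re-run to refresh the artifact loadjob picks up.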
OK, I'm unsure where the time will get extracted, but have you looked at this document? https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/EdgeProcessor/TimeExtractionPipeline
Hi @KendallW, the error is "Invalid username or password." However, I am able to connect to the same database from other applications using the username and password in the Identity, and that is what I am using in the JDBC URL to access it.
Do you have an example of what the props.conf would look like with that rule? I've tried several variations, but it still doesn't take effect.
I did what you explained, but it still doesn't work; when I check the Zscaler logs, the url_domain field still does not appear. It is important to mention that I am implementing this from a custom app for Zscaler.
@sjringo - This is the result when the servers are taking traffic. I am going to test it tonight when the servers go down, to confirm that the alert triggers outside the window and does not trigger during the window. In both cases at least one server is down.
Example rex:

|rex ".*\"LastmodifiedBy\":\s\"(?<LastmodifiedBy>[^\"]+)\""
|rex ".*\"ModifiedDate\":\s\"(?<ModifiedDate>[^\"]+)\""
|rex ".*\"ComponentName\":\s\"(?<ComponentName>[^\"]+)\""
|rex ".*\"RecordId\":\s\"(?<RecordId>[^\"]+)\""
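If it helps, a minimal end-to-end sketch chaining these extractions and tabling the fields (index and sourcetype are placeholders). The leading .* in each pattern is dropped here because rex matches anywhere in the field by default:

index=xxx sourcetype=xxx
| rex "\"LastmodifiedBy\":\s\"(?<LastmodifiedBy>[^\"]+)\""
| rex "\"ModifiedDate\":\s\"(?<ModifiedDate>[^\"]+)\""
| rex "\"ComponentName\":\s\"(?<ComponentName>[^\"]+)\""
| rex "\"RecordId\":\s\"(?<RecordId>[^\"]+)\""
| table LastmodifiedBy ModifiedDate ComponentName RecordId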
Thanks, it looks like this contains a successful response; can we exclude it?

Publish message on SQS, queueName=xxx, retryCount=0, message={"traceId":"xxxtraceId","clientContext":"xxxclientContext","cardTokenReferenceId":"xxxCardTokenReferenceId","eventSource":"bulkDelete","walletWebResponse":{"clientContext":"xxxclientContext","ewSID":"xxxSID,"timestampISO8601":"2024-04-05T00:00:14Z","statusCode":"0","statusText":"Success"}}
Thanks for the quick reply! One correction to something I said earlier: the format of the "Date" in my lookup file is YYYY-MM-DD. It is in the same dashboard.

I tried what you had mentioned already, but with the global parameters within quotes. That didn't seem to return what I wanted, but it did not lead to an error. Then I tried without quotes, and I get this error:

Error in 'where' command: The operator at 'mon@mon AND Date<=@mon ' is invalid.

The where clause is like:

where customer="XYZ" AND Date>=$global_time.earliest$" AND Date<=$global_time.latest$"

I've also tried this:

| inputlookup mylookup.csv
| eval lookupfiledatestart =strftime($global_time.earliest$, "%Y-%m-%d")
| eval lookupfiledateend =strftime($global_time.latest$, "%Y-%m-%d")
| where client="XYZ" AND Date>=lookupfiledatestart AND Date<=lookupfiledateend

That gives me this error:

Error in 'EvalCommand': The expression is malformed. An unexpected character is reached at '@mon, "%Y-%m-%d")'.
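The errors suggest the time tokens are resolving to relative-time strings such as -1mon@mon rather than epoch values, which is why strftime cannot use them directly. One pattern that is often used instead (a sketch only; it assumes the panel's search inherits the dashboard's global time range, so that addinfo reports it as info_min_time / info_max_time):

| inputlookup mylookup.csv
| addinfo ``` adds info_min_time / info_max_time from the search's time range ```
| eval lookupfiledatestart = strftime(info_min_time, "%Y-%m-%d")
| eval lookupfiledateend = strftime(info_max_time, "%Y-%m-%d")
| where customer="XYZ" AND Date>=lookupfiledatestart AND Date<=lookupfiledateend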
It doesn't appear that these get logged, since the bulletin board does not log these into an index, but they are accessible via REST:

| rest /services/admin/messages splunk_server=local

More details found here: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/managebulletins/
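As a rough follow-up, the output can be tabled like any other REST result (a sketch; title is always present in REST results, while the message and severity field names are assumptions to verify against your version):

| rest /services/admin/messages splunk_server=local
| table title message severity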
You could try extracting the JSON object after message=, then spathing it until you get the fields you would like. E.g.

index=xxx sourcetype=xxx "Publish message on SQS" bulkDelete
| rex field=_raw "message=(?<message>{.*}$)"
| spath input=message
| spath input=errors{}.errorDetails
| table eventSource statusCode statusText
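Since the goal was to count error responses, one possible extension of the above (a sketch, assuming the spath calls populate statusCode and statusText only for events that carry an errors array) is:

index=xxx sourcetype=xxx "Publish message on SQS" bulkDelete
| rex field=_raw "message=(?<message>{.*}$)"
| spath input=message
| spath input=errors{}.errorDetails
| where isnotnull(statusCode) ``` keeps only events where errorDetails yielded a status ```
| stats count by eventSource statusCode statusText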
Just to add slightly to yuanliu's answer: you can use the "| addcoltotals" command if you would like to add another row containing the totals of the numerical columns. You'll first have to convert the columns that contain non-numerical characters, like the "(0%)" part.
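For example, a minimal sketch of that conversion, assuming the ERROR and SKIPPED columns from the earlier table are the ones carrying the "(0%)" suffix:

| eval ERROR = tonumber(replace(ERROR, "\s\(\d+%\)", "")), SKIPPED = tonumber(replace(SKIPPED, "\s\(\d+%\)", "")) ``` strip the percentage suffix and cast to numbers ```
| addcoltotals labelfield=FUNCTION_NAME label="Total" ``` append a row with column totals ```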
In our log, I'd like to extract statusText and categorize it in a table, to see how many error responses there are per statusCode and statusText. EX:

eventSource | statusCode | statusText
bulkDelete | 1020 | 3031: No Card found with the identifier for the request

But my query is getting "has exceeded configured match_limit, consider raising the value in limits.conf." after using field extraction.

index = xxx sourcetype=xxx "Publish message on SQS"
| search bulkDelete
| rex field=_raw "(?ms)^(?:[^:\\n]*:){7}\"(?P<error_bulkDelete>[^\"]+)(?:[^:\\n]*:){2}\"(?P<error_errorCode>[^\"]+)[^:\\n]*:\"(?P<error_desc>[^\"]+)(?:[^:\\n]*:){6}\\\\\"(?P<error_statusText>[^\\\\]+)" offset_field=_extracted_fields_bounds

Target log:

Publish message on SQS, queueName=xxx, retryCount=0, message={"traceId":"xxx1112233","clientContext":"xxxxxclientContext","cardTokenReferenceId":"xxxcardTokenReferenceId","eventSource":"bulkDelete","errors":[{"errorCode":"52099","errorDescription":"Feign Client Exception.","retryCategory":"RETRYABLE","errorDetails":"{\"clientContext\":\"xxxxxclientContext\",\"ewSID\":\"xxxxSID\",\"statusCode\":\"1020\",\"statusText\":\"3031: No Card found with the identifier for the request\",\"timestampISO8601\":\"2024-04-05T00:00:26Z\"}"}]}

I checked similar posts; they suggested using non-greedy matching, so I tried:

index = "xxx" sourcetype=xxx "Publish message on SQS*" bulkDelete
| rex field=_raw "\"statusText\":\s*\"(?P<statusText>[^\"]+)\""
| where NOT LIKE( statusText, "%Success%")

If I add "| table", I get blank content in statusText.
We would like to be able to configure the Okta application to be of an "API Services" application type vs a "Web Application" type when setting up the "Splunk Add-on for Okta Identity Cloud TA" for OAuth2. Using a "Web Application" type requires a user account associated with the auth flow. This ties the auth to a specific user; if that user is suspended or disabled, the TA stops working. Ideally this would not be tied to a user, but to an "API Services" application type. Okta recommends the "API Services" application type for machine-to-machine auth. Are there plans to support this in the add-on going forward, since the "Web Application" type is less robust and not what Okta ideally recommends?
Finally figured out it was a permission issue. I didn't give the splunk user ownership of the index locations.
Due to some oddities of our environment, my team needs default fields in order to run some playbooks automatically. We've built these fields into the notable events which get sent over from Splunk. However, containers are built without an artifact when created manually. While we could certainly train people to follow some manual steps to create an artifact or toggle the Artifact Dependency switch, that goes against the nature of SOAR and it's easy to miss something. It's easier to have a playbook create an artifact with those fields we need. Unfortunately, the Artifact Dependency switch defaults to off. So, the actual question: Has anyone found a way to change the default for the Artifact Dependency switch or to make a playbook run before an artifact is created?
Do not treat structured data such as JSON as text and be tempted to use rex for extraction.  Use QA-tested Splunk commands such as spath to extract from the structure, then mvexpand to handle the array.

| spath path=message{}
| mvexpand message{}
| spath input=message{}

Your sample will give you:

ERROR | ERROR_1 | ERROR_IND | FUNCTION_NAME | PROCESSED | REMAINING | SKIPPED | TARGET_SYSTEM | TOTAL | id | severity | message{}
0 (0%) | 0 | 0 | CPW_02170 | 121257 | 0 | 35 (0%) | SEQ | 121257 | 0 | Information | {"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND":"0","ERROR_1":"0"}
0 (0%) | 0 | 0 | CPW_02171 | 26434 | 0 | 19 (0%) | CPW | 26434 | 0 | Information | {"TARGET_SYSTEM":"CPW","FUNCTION_NAME":"CPW_02171","TOTAL":"26434","PROCESSED":"26434","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"}
0 (0%) | 0 | 0 | CPW_02172 | 2647812 | 0 | 19 (0%) | SEQ | 23343 | 0 | Information | {"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02172","TOTAL":"23343","PROCESSED":"2647812","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"}

Here is a data emulation.  Play with it and compare with real data:

| makeresults
| eval _raw="{\"id\":\"0\",\"severity\":\"Information\",\"message\":[{\"TARGET_SYSTEM\":\"SEQ\",\"FUNCTION_NAME\":\"CPW_02170\",\"TOTAL\":\"121257\",\"PROCESSED\":\"121257\",\"REMAINING\":\"0\",\"ERROR\":\"0 (0%)\",\"SKIPPED\":\"35 (0%)\",\"ERROR_IND\":\"0\",\"ERROR_1\":\"0\"},{\"TARGET_SYSTEM\":\"CPW\",\"FUNCTION_NAME\":\"CPW_02171\",\"TOTAL\":\"26434\",\"PROCESSED\":\"26434\",\"REMAINING\":\"0\",\"ERROR\":\"0 (0%)\",\"SKIPPED\":\"19 (0%)\",\"ERROR_IND\":\"0\",\"ERROR_1\":\"0\"},{\"TARGET_SYSTEM\":\"SEQ\",\"FUNCTION_NAME\":\"CPW_02172\",\"TOTAL\":\"23343\",\"PROCESSED\":\"2647812\",\"REMAINING\":\"0\",\"ERROR\":\"0 (0%)\",\"SKIPPED\":\"19 (0%)\",\"ERROR_IND\":\"0\",\"ERROR_1\":\"0\"}]}"
| spath
``` data emulation above ```
Here are the settings that you can enable in log.conf to get more detailed logging:

$splunk_install_dir$/etc/log.conf
category.X509=DEBUG
category.UiAuth=DEBUG

Post the error message here or call support.
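Once DEBUG is enabled, one way to pull the extra detail back out of the internal logs (a sketch; the component names are assumed to mirror the categories above, so verify them against splunkd.log):

index=_internal sourcetype=splunkd log_level=DEBUG (component=X509 OR component=UiAuth)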
Hi, almost a year late to this thread, but I'm experiencing the same issue. Has there been any resolution for you? Thanks
Simply put: Don't.  Do not treat structured data as text and use rex to extract individual fields; use rex only to isolate the structured data itself.  In this case, the structure is JSON.

Not only that.  Your data contains very different JSON nodes that all have "LastModifiedBy", "RecordId", etc.  Your result table must distinguish between JSON nodes such as "servicechannel" and "omnisupervisorconfig".  Does this make sense?

Further, each of these nodes uses an array.  You may need to distinguish between each element in the arrays.  Because your illustration does not include multiple elements in the array, I cannot speculate what your developers' intention (semantics) is in using an array for each of the three distinct nodes.  It is possible that they committed the ultimate JSON data sin by assuming an implied semantic meaning in the arrays.

In light of this, I will not introduce the more intricate part of mixed kv-array data processing and just assume that all your data come with a single element in each of the three arrays. (When your developers give you this type of data, it is even more dangerous to use rex to extract data elements, because no regex is compatible with the inherent flexibility afforded by the data structure.)

Here is my suggestion:

| rex "^[^{]+(?<jsondata>.+)"
| spath input=jsondata

Your sample data gives 3 sets of the 4 columns you desired, for a total of 12 columns.  That's too wide for display, so I will show a transposed table:

field_name | field_value
livechatbutton{}.ComponentName | LiveChatButton
livechatbutton{}.LastmodifiedBy | XYZ
livechatbutton{}.ModifiedDate | 2024-04-16T16:31:35.000Z
livechatbutton{}.RecordId | 5638X000000Xw55QAC
omnisupervisorconfig{}.ComponentName | OmniSupervisorConfig
omnisupervisorconfig{}.LastmodifiedBy | XYZ
omnisupervisorconfig{}.ModifiedDate | 2024-04-16T16:17:37.000Z
omnisupervisorconfig{}.RecordId | 0Q27X000000KyrESAS
servicechannel{}.ComponentName | ServiceChannel
servicechannel{}.LastmodifiedBy | XYZ
servicechannel{}.ModifiedDate | 2024-04-15T17:20:09.000Z
servicechannel{}.RecordId | 0N98X001200Gvv3SAC

You must then decide how you want to present such data.  I do notice that each JSON node key and each ComponentName have an apparent semantic relationship.  If key name servicechannel and ComponentName ServiceChannel, etc., are indeed semantically related, your developers also committed a different type of data structure sin: that of duplicating semantic notation (without declaration).  The data could easily be presented without losing resolution, but in a much simpler and more comprehensible form:

[
  { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-15T17:20:09.000Z", "ComponentName": "ServiceChannel", "RecordId": "0N98X001200Gvv3SAC" },
  { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:17:37.000Z", "ComponentName": "OmniSupervisorConfig", "RecordId": "0Q27X000000KyrESAS" },
  { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:31:35.000Z", "ComponentName": "LiveChatButton", "RecordId": "5638X000000Xw55QAC" }
]

If you have any influence on the developers, discuss the data structure with them, ask them to clarify the intention/semantics of the structure, and help improve it.  This is better for everybody in the long run.
If you have no influence, one possible way to deal with this mess is to ignore all the key names and treat them as a single name, i.e., by assuming the data to be

[
  {"array": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-15T17:20:09.000Z", "ComponentName": "ServiceChannel", "RecordId": "0N98X001200Gvv3SAC" } ]},
  {"array": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:17:37.000Z", "ComponentName": "OmniSupervisorConfig", "RecordId": "0Q27X000000KyrESAS" } ]},
  {"array": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:31:35.000Z", "ComponentName": "LiveChatButton", "RecordId": "5638X000000Xw55QAC" } ]}
]

To do the equivalent in SPL (and also handle potential multiple array elements in the absence of semantic knowledge):

| rex "^[^{]+(?<jsondata>.+)"
| eval jsonnode = json_keys(jsondata)
| foreach jsonnode mode=json_array
    [eval newjson = mvappend(newjson, json_object("array", json_extract(jsondata, <<ITEM>>)))]
| mvexpand newjson
| spath input=newjson path=array{}
| mvexpand array{} ``` potential multiple elements ```
| spath input=array{}

Your sample data will give:

ComponentName | LastmodifiedBy | ModifiedDate | RecordId | newjson
ServiceChannel | XYZ | 2024-04-15T17:20:09.000Z | 0N98X001200Gvv3SAC | {"array":[{"LastmodifiedBy":"XYZ","ModifiedDate":"2024-04-15T17:20:09.000Z","ComponentName":"ServiceChannel","RecordId":"0N98X001200Gvv3SAC"}]}
OmniSupervisorConfig | XYZ | 2024-04-16T16:17:37.000Z | 0Q27X000000KyrESAS | {"array":[{"LastmodifiedBy":"XYZ","ModifiedDate":"2024-04-16T16:17:37.000Z","ComponentName":"OmniSupervisorConfig","RecordId":"0Q27X000000KyrESAS"}]}
LiveChatButton | XYZ | 2024-04-16T16:31:35.000Z | 5638X000000Xw55QAC | {"array":[{"LastmodifiedBy":"XYZ","ModifiedDate":"2024-04-16T16:31:35.000Z","ComponentName":"LiveChatButton","RecordId":"5638X000000Xw55QAC"}]}

Here is an emulation for you to play with and compare with real data:

| makeresults
| eval _raw="message: Updated Components { \"servicechannel\": [ { \"LastmodifiedBy\": \"XYZ\", \"ModifiedDate\": \"2024-04-15T17:20:09.000Z\", \"ComponentName\": \"ServiceChannel\", \"RecordId\": \"0N98X001200Gvv3SAC\" } ], \"omnisupervisorconfig\": [ { \"LastmodifiedBy\": \"XYZ\", \"ModifiedDate\": \"2024-04-16T16:17:37.000Z\", \"ComponentName\": \"OmniSupervisorConfig\", \"RecordId\": \"0Q27X000000KyrESAS\" } ], \"livechatbutton\": [ { \"LastmodifiedBy\": \"XYZ\", \"ModifiedDate\": \"2024-04-16T16:31:35.000Z\", \"ComponentName\": \"LiveChatButton\", \"RecordId\": \"5638X000000Xw55QAC\" } ] }"