All Posts


It doesn't appear that these get logged, since the bulletin board does not write them to an index, but they are accessible via REST:

| rest /services/admin/messages splunk_server=local

More details can be found here: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/managebulletins/
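As a follow-up, here is a minimal sketch for tabulating the current bulletin messages. The exact content fields returned by this endpoint (such as severity) can vary by Splunk version, so inspect the raw output of the rest command first and adjust the table accordingly:

| rest /services/admin/messages splunk_server=local
| table title message severity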
You could try extracting the JSON object after message=, then spathing it until you get the fields you would like. E.g.

index = xxx sourcetype=xxx "Publish message on SQS" bulkDelete
| rex field=_raw "message=(?<message>{.*}$)"
| spath input=message
| spath input=errors{}.errorDetails
| table eventSource statusCode statusText
Just to add slightly to yuanliu's answer, you can use the "| addcoltotals" command if you would like to add another row containing the totals of the numerical columns. You'll first have to convert the columns that contain non-numerical characters, like the "(0%)" part.
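A minimal sketch of that idea, assuming the field names ERROR and SKIPPED from the sample data in this thread and that dropping the trailing "(0%)" annotation is acceptable; adjust the field list and the label field to your data:

| foreach ERROR SKIPPED [eval <<FIELD>> = tonumber(replace(<<FIELD>>, "\s*\(.*\)$", ""))]
| addcoltotals labelfield=TARGET_SYSTEM label="Total"

The foreach strips the parenthesized percentage and converts the remainder to a number; addcoltotals then appends a row with the column totals (it only sums fields that are numeric at that point).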
In our log, I'd like to extract statusText and categorize it in a table to see how many error responses there are by statusCode and statusText. For example:

eventSource statusCode statusText
bulkDelete 1020 3031: No Card found with the identifier for the request

But my query is getting "has exceeded configured match_limit, consider raising the value in limits.conf." after using field extraction.

index = xxx sourcetype=xxx "Publish message on SQS"
| search bulkDelete
| rex field=_raw "(?ms)^(?:[^:\\n]*:){7}\"(?P<error_bulkDelete>[^\"]+)(?:[^:\\n]*:){2}\"(?P<error_errorCode>[^\"]+)[^:\\n]*:\"(?P<error_desc>[^\"]+)(?:[^:\\n]*:){6}\\\\\"(?P<error_statusText>[^\\\\]+)" offset_field=_extracted_fields_bounds

Target log:

Publish message on SQS, queueName=xxx, retryCount=0, message={"traceId":"xxx1112233","clientContext":"xxxxxclientContext","cardTokenReferenceId":"xxxcardTokenReferenceId","eventSource":"bulkDelete","errors":[{"errorCode":"52099","errorDescription":"Feign Client Exception.","retryCategory":"RETRYABLE","errorDetails":"{\"clientContext\":\"xxxxxclientContext\",\"ewSID\":\"xxxxSID\",\"statusCode\":\"1020\",\"statusText\":\"3031: No Card found with the identifier for the request\",\"timestampISO8601\":\"2024-04-05T00:00:26Z\"}"}]}

I checked similar posts, and they suggested using a non-greedy match, so I tried:

index = "xxx" sourcetype=xxx "Publish message on SQS*" bulkDelete
| rex field=_raw "\"statusText\":\s*\"(?P<statusText>[^\"]+)\""
| where NOT LIKE( statusText, "%Success%")

If I add "| table", I get blank content in statusText.
We would like to be able to configure the Okta application to be of an "API Services" application type vs a "Web Application" type when setting up the "Splunk Add-on for Okta Identity Cloud TA" for OAuth2. Using the "Web Application" type requires a user account associated with the auth flow. This ties the auth to a specific user, so if the user is suspended or disabled, the TA stops working. Ideally this would not be tied to a user, but to an "API Services" application type, which Okta recommends for machine-to-machine auth. Are there plans to support this in the add-on going forward, since the "Web Application" type is less robust and not what Okta recommends?
Finally figured out it was a permission issue. I didn't give splunk ownership over the index locations. 
Due to some oddities of our environment, my team needs default fields in order to run some playbooks automatically. We've built these fields into the notable events which get sent over from Splunk. However, containers are built without an artifact when created manually. While we could certainly train people to follow some manual steps to create an artifact or toggle the Artifact Dependency switch, that goes against the nature of SOAR and it's easy to miss something. It's easier to have a playbook create an artifact with those fields we need. Unfortunately, the Artifact Dependency switch defaults to off. So, the actual question: Has anyone found a way to change the default for the Artifact Dependency switch or to make a playbook run before an artifact is created?
Do not treat structured data such as JSON as text and be tempted to use rex for extraction. Use QA-tested Splunk commands such as spath to extract from the structure, then mvexpand to handle the array.

| spath path=message{}
| mvexpand message{}
| spath input=message{}

Your sample will give you

ERROR | ERROR_1 | ERROR_IND | FUNCTION_NAME | PROCESSED | REMAINING | SKIPPED | TARGET_SYSTEM | TOTAL | id | severity | message{}
0 (0%) | 0 | 0 | CPW_02170 | 121257 | 0 | 35 (0%) | SEQ | 121257 | 0 | Information | {"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND":"0","ERROR_1":"0"}
0 (0%) | 0 | 0 | CPW_02171 | 26434 | 0 | 19 (0%) | CPW | 26434 | 0 | Information | {"TARGET_SYSTEM":"CPW","FUNCTION_NAME":"CPW_02171","TOTAL":"26434","PROCESSED":"26434","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"}
0 (0%) | 0 | 0 | CPW_02172 | 2647812 | 0 | 19 (0%) | SEQ | 23343 | 0 | Information | {"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02172","TOTAL":"23343","PROCESSED":"2647812","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"}

Here is a data emulation. Play with it and compare with real data.

| makeresults
| eval _raw="{\"id\":\"0\",\"severity\":\"Information\",\"message\":[{\"TARGET_SYSTEM\":\"SEQ\",\"FUNCTION_NAME\":\"CPW_02170\",\"TOTAL\":\"121257\",\"PROCESSED\":\"121257\",\"REMAINING\":\"0\",\"ERROR\":\"0 (0%)\",\"SKIPPED\":\"35 (0%)\",\"ERROR_IND\":\"0\",\"ERROR_1\":\"0\"},{\"TARGET_SYSTEM\":\"CPW\",\"FUNCTION_NAME\":\"CPW_02171\",\"TOTAL\":\"26434\",\"PROCESSED\":\"26434\",\"REMAINING\":\"0\",\"ERROR\":\"0 (0%)\",\"SKIPPED\":\"19 (0%)\",\"ERROR_IND\":\"0\",\"ERROR_1\":\"0\"},{\"TARGET_SYSTEM\":\"SEQ\",\"FUNCTION_NAME\":\"CPW_02172\",\"TOTAL\":\"23343\",\"PROCESSED\":\"2647812\",\"REMAINING\":\"0\",\"ERROR\":\"0 (0%)\",\"SKIPPED\":\"19 (0%)\",\"ERROR_IND\":\"0\",\"ERROR_1\":\"0\"}]}"
| spath
``` data emulation above ```
Here are the settings you can enable in the logging configuration file to get more detailed logging.

$splunk_install_dir$/etc/log.cfg

category.X509=DEBUG
category.UiAuth=DEBUG

Post the error message here or call support.
Hi, almost a year late to this thread, but experiencing the same. Has there been any resolution for you?   Thanks
Simply put: Don't. Do not treat structured data as text and use rex to extract individual fields; use rex only to carve out the structured data itself. In this case, the structure is JSON.

Not only that. Your data contains very different JSON nodes that all have "LastModifiedBy", "RecordId", etc. Your result table must distinguish between the JSON nodes "servicechannel", "omnisupervisorconfig", etc. Does this make sense?

Further, each of these nodes uses an array. You may need to distinguish between the elements in the arrays. Because your illustration does not include multiple elements in any array, I cannot speculate what your developers' intention (semantics) is in using an array for three distinct nodes. It is possible that they committed the ultimate JSON data sin by assuming an implied semantic meaning in the arrays.

In light of this, I will not introduce the more intricate parts of mixed kv-array data processing and will just assume that all your data comes with a single element in each of the three arrays. (When your developers give you this type of data, it is even more dangerous to use rex to extract data elements, because no regex is compatible with the inherent flexibility afforded by the data structure.) Here is my suggestion:

| rex "^[^{]+(?<jsondata>.+)"
| spath input=jsondata

Your sample data gives 3 sets of the 4 columns you desired, for a total of 12 columns. That's too wide for display, so I will show a transposed table:

field_name | field_value
livechatbutton{}.ComponentName | LiveChatButton
livechatbutton{}.LastmodifiedBy | XYZ
livechatbutton{}.ModifiedDate | 2024-04-16T16:31:35.000Z
livechatbutton{}.RecordId | 5638X000000Xw55QAC
omnisupervisorconfig{}.ComponentName | OmniSupervisorConfig
omnisupervisorconfig{}.LastmodifiedBy | XYZ
omnisupervisorconfig{}.ModifiedDate | 2024-04-16T16:17:37.000Z
omnisupervisorconfig{}.RecordId | 0Q27X000000KyrESAS
servicechannel{}.ComponentName | ServiceChannel
servicechannel{}.LastmodifiedBy | XYZ
servicechannel{}.ModifiedDate | 2024-04-15T17:20:09.000Z
servicechannel{}.RecordId | 0N98X001200Gvv3SAC

You must then decide how you want to present such data. I do notice that each JSON node key and each ComponentName have an apparent semantic relationship. If the key name servicechannel and the ComponentName ServiceChannel, etc., are indeed semantically related, your developers also committed a different type of data structure sin: duplicating semantic notation (without declaration). The data could easily be presented without losing resolution, but in a much simpler and more comprehensible form:

[
  { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-15T17:20:09.000Z", "ComponentName": "ServiceChannel", "RecordId": "0N98X001200Gvv3SAC" },
  { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:17:37.000Z", "ComponentName": "OmniSupervisorConfig", "RecordId": "0Q27X000000KyrESAS" },
  { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:31:35.000Z", "ComponentName": "LiveChatButton", "RecordId": "5638X000000Xw55QAC" }
]

If you have any influence on the developers, discuss the data structure with them, ask them to clarify the intention/semantics of the structure, and help improve it. This is better for everybody in the long run.
If you have no influence, one possible way to deal with this mess is to ignore all the key names and treat them as a single name, i.e., by assuming the data to be

[
  {"array": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-15T17:20:09.000Z", "ComponentName": "ServiceChannel", "RecordId": "0N98X001200Gvv3SAC" } ]},
  {"array": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:17:37.000Z", "ComponentName": "OmniSupervisorConfig", "RecordId": "0Q27X000000KyrESAS" } ]},
  {"array": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:31:35.000Z", "ComponentName": "LiveChatButton", "RecordId": "5638X000000Xw55QAC" } ]}
]

To do the equivalent in SPL (and also handle potential multiple array elements in the absence of semantic knowledge):

| rex "^[^{]+(?<jsondata>.+)"
| eval jsonnode = json_keys(jsondata)
| foreach jsonnode mode=json_array [eval newjson = mvappend(newjson, json_object("array", json_extract(jsondata, <<ITEM>>)))]
| mvexpand newjson
| spath input=newjson path=array{}
| mvexpand array{} ``` potential multiple elements ```
| spath input=array{}

Your sample data will give

ComponentName | LastmodifiedBy | ModifiedDate | RecordId | newjson
ServiceChannel | XYZ | 2024-04-15T17:20:09.000Z | 0N98X001200Gvv3SAC | {"array":[{"LastmodifiedBy":"XYZ","ModifiedDate":"2024-04-15T17:20:09.000Z","ComponentName":"ServiceChannel","RecordId":"0N98X001200Gvv3SAC"}]}
OmniSupervisorConfig | XYZ | 2024-04-16T16:17:37.000Z | 0Q27X000000KyrESAS | {"array":[{"LastmodifiedBy":"XYZ","ModifiedDate":"2024-04-16T16:17:37.000Z","ComponentName":"OmniSupervisorConfig","RecordId":"0Q27X000000KyrESAS"}]}
LiveChatButton | XYZ | 2024-04-16T16:31:35.000Z | 5638X000000Xw55QAC | {"array":[{"LastmodifiedBy":"XYZ","ModifiedDate":"2024-04-16T16:31:35.000Z","ComponentName":"LiveChatButton","RecordId":"5638X000000Xw55QAC"}]}

Here is an emulation for you to play with and compare with real data.

| makeresults
| eval _raw="message: Updated Components { \"servicechannel\": [ { \"LastmodifiedBy\": \"XYZ\", \"ModifiedDate\": \"2024-04-15T17:20:09.000Z\", \"ComponentName\": \"ServiceChannel\", \"RecordId\": \"0N98X001200Gvv3SAC\" } ], \"omnisupervisorconfig\": [ { \"LastmodifiedBy\": \"XYZ\", \"ModifiedDate\": \"2024-04-16T16:17:37.000Z\", \"ComponentName\": \"OmniSupervisorConfig\", \"RecordId\": \"0Q27X000000KyrESAS\" } ], \"livechatbutton\": [ { \"LastmodifiedBy\": \"XYZ\", \"ModifiedDate\": \"2024-04-16T16:31:35.000Z\", \"ComponentName\": \"LiveChatButton\", \"RecordId\": \"5638X000000Xw55QAC\" } ] }"
Could you try explicitly disabling workload management? Tscroggins has the instructions to do that above.
{"id":"0","severity":"Information","message":[{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND"... See more...
{"id":"0","severity":"Information","message":[{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND":"0","ERROR_1":"0"},{"TARGET_SYSTEM":"CPW","FUNCTION_NAME":"CPW_02171","TOTAL":"26434","PROCESSED":"26434","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"},{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02172","TOTAL":"23343","PROCESSED":"2647812","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"}]} I want to extract all fields in the form of table from  "message" which is holding JSON array . And I want a total row for each column where total running total will display for each numeric column based on TARGET_SYSTEM . 
All I can say is: use now() instead of _time in the evaluation of whether to trigger or not in the solution provided earlier. Do you have any test data showing your attribute values, to help figure out why it is falsely triggering?

| eval current_time=now()
| eval excluded_start_time=strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_end_time=strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S")
| eval is_maintenance_window=if(current_time >= excluded_start_time AND current_time < excluded_end_time, 1, 0)
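As a minimal sketch of how the flag might then be used to suppress the alert, assuming the alert is configured to trigger when the search returns results (the index/sourcetype names below are placeholders for your existing alert search):

index=your_index sourcetype=your_sourcetype ``` your existing alert search goes here ```
| eval is_maintenance_window=if(now() >= strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S") AND now() < strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S"), 1, 0)
| where is_maintenance_window==0 ``` drop results during the window so the alert does not fire ```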
@sjringo - We don't have a specific date as it keeps changing, so I created two variables where I specify the date and time.

| eval excluded_start_time=strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_end_time=strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S")
Assuming Invetory is spelled (in)correctly, you could try this - the rex at the end is required because this date has an embedded space and it is the last field in the message. If the fields were re-ordered or an extra field (without an embedded space) were in the message, then the rex would not be required.

The problem is less the embedded space, more the lack of embedded quotation marks/a proper field separator. It is semantically more pleasing to fix the structure with rex than to use rex to extract one data snippet when most are extracted with the extract command. (But if you have any influence on the developers, beg them to add quotation marks - more on this later.)

| rex field=message mode=sed "s/Date=/&\"/ s/$/\"/"
| rename message as _raw
| extract

It would give you the same result, like

CPWRemaining | CPWTotal | EASRemaining | EASTotal | InvetoryDate | SEQRemaining | SEQTotal | VRSRemaining | VRSTotal | id | severity
5612 | 749860 | 15 | 1062804 | 4/16/2024 7:34:25 PM | 32746 | 1026137 | 0 | 238 | 0 | Information

About feedback to developers: @ITWhisperer gave one option that takes advantage of a side effect (gem feature) of Splunk's extract command, by adding a comma at the end of every key-value pair. The developers do not have to swap the order; simply adding a literal comma after each value works, like this:

{"id":"0","severity":"Information","message":"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=4/16/2024 7:34:25 PM,"}

A more robust fix (that does not rely on Splunk's "generosity") is to properly quote the value. Any language can extract that without the programmer's attention.

{"id":"0","severity":"Information","message":"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=\"4/16/2024 7:34:25 PM\""}

The logic should be simple enough: numeric data, no quotes; string data, quotes.
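For illustration, here is a quick run-anywhere emulation (a sketch, assuming the developers adopt the quoted-value format above) showing that the quoted version extracts cleanly without the sed fix:

| makeresults
| eval _raw="{\"id\":\"0\",\"severity\":\"Information\",\"message\":\"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=\\\"4/16/2024 7:34:25 PM\\\"\"}"
| spath ``` pull out id, severity, and the message string ```
| rename message as _raw
| extract ``` auto kv handles the quoted InvetoryDate value, embedded space and all ```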
message: Updated Components {
  "servicechannel": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-15T17:20:09.000Z", "ComponentName": "ServiceChannel", "RecordId": "0N98X001200Gvv3SAC" } ],
  "omnisupervisorconfig": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:17:37.000Z", "ComponentName": "OmniSupervisorConfig", "RecordId": "0Q27X000000KyrESAS" } ],
  "livechatbutton": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:31:35.000Z", "ComponentName": "LiveChatButton", "RecordId": "5638X000000Xw55QAC" } ]
}

Desired table columns: LastModifiedBy, ModifiedBy, Component, RecordId
thanks
Close. The count will depend on how many of those hostname-customer_name pairs you have. If you have just one (supposedly only from the lookup), it will be 1. If you have two (from the lookup and from the data), it will be 2. At least it should work this way. Of course, I know neither your events nor your lookup contents, so I'm only deducing the data format from your searches.

See this short run-anywhere example.

| makeresults
| eval hosts=split("a,b,c,d:a,b,c",":")
| mvexpand hosts
| eval customers="aaa"
| eval hosts=split(hosts,",")

This prepares some mockup data: one row supposedly from your summarized values from events and one from the summarized lookup. After you add

| stats count by customers hosts

you get three rows with a count of 2 and one row with a count of 1.
I know vCenter has an API to get information about the local file system on a guest VM running on an ESXi host (as long as VMware Tools is installed on the VM):

capacity (in bytes)
freeSpace (in bytes)
diskPath (e.g. C:\ for Windows or / for *nix)
fileSystemType (e.g. ext3, NTFS, etc.)

Ref #1: https://vdc-download.vmware.com/vmwb-repository/dcr-public/184bb3ba-6fa8-4574-a767-d0c96e2a38f4/ba9422ef-405c-47dd-8553-e11b619185b2/SDK/vsphere-ws/docs/ReferenceGuide/vim.vm.GuestInfo.DiskInfo.html
Ref #2: https://developer.vmware.com/apis/vsphere-automation/latest/vcenter/api/vcenter/vm/vm/guest/local-filesystem/get/

I believe RVTools and some monitoring tools use this specific API to grab info about the local file system on the guest VM.

So far I was able to find metrics regarding datastore usage. This is fine, but an equally important metric is local disk utilization of the guest VM. Which metric is responsible for getting this info in the VMware or VMware Metrics add-ons?

https://docs.splunk.com/Documentation/AddOns/released/VMW/Sourcetypes
https://docs.splunk.com/Documentation/AddOns/released/VMWmetrics/Sourcetypes

If none of the listed sourcetypes covers it, is there a way to customize the VMW or VMWmetrics add-ons to grab this crucial information about VMs from vCenter? Or perhaps I should look elsewhere - I mean a different app/add-on?