I have configured OAuth in a custom account in the Splunk Add-on for Salesforce. After configuring the account and saving the configuration, it reaches out to Salesforce. I log in to Salesforce and it asks me to grant access. Once I click Submit, it comes back with the error "Error occurred while trying to authenticate. Please try Again" in the app. I am not sure what the issue is, or whether I need to configure something on the Salesforce side.
Anything written by a script to stdout is indexed as a raw event by Splunk. You can use props.conf settings to extract fields from the event. By default, Splunk will extract keys and values that are in key=value format, so perhaps your PowerShell script could output that format.
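As an illustration of the key=value idea (a sketch, not Splunk-specific code, and the field names are invented for the example): if each event is emitted as one line of space-separated key=value pairs, Splunk's default automatic KV extraction can pick the fields up. In PowerShell the equivalent would be a Write-Output of a string formatted the same way.

```python
def to_kv_line(fields):
    """Render a dict as space-separated 'key=value' pairs.

    Values containing spaces are quoted so Splunk's automatic
    key=value extraction keeps them as a single value.
    """
    parts = []
    for key, value in fields.items():
        text = str(value)
        if " " in text:
            text = '"%s"' % text
        parts.append("%s=%s" % (key, text))
    return " ".join(parts)

# Example event with made-up field names:
print(to_kv_line({"host": "srv01", "cpu_pct": 73.5, "status": "OK"}))
# host=srv01 cpu_pct=73.5 status=OK
```

The same one-line-per-event, key=value shape is what the default KV_MODE extraction expects, so no extra props.conf work should be needed for simple cases.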
@ITWhisperer I tried using the above rex for these log sources, but it is not working. For the following five log sources, I would like to extract the node number, e.g. node06, node03, node01:

E:\view\int\t4\apch\node\node06\log\server.log
E:\view\int\t4\apch\node\node06\log\run.log
E:\view\int\t4\apch\node\node03\log\server.log
E:\view\int\t4\apch\node\node01\log\server.log
E:\view\int\t4\apch\node\node01\log\run.log

For the following three log sources, I would like to extract core02, web37, core01:

E:\view\int\t4\logs\core02-core.log
E:\view\int\t4\logs\web37-wfmws.log
E:\view\int\t4\logs\core01-core.log

Since the two log formats are different, the solution you shared is not working. Please help.
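Not a definitive answer, but one possible approach is a single pattern that accepts both shapes: capture the letters+digits token that sits right after a backslash and is immediately followed by either \log\ or a hyphen. Sketched in Python so the pattern is easy to check; in SPL the same expression would go into rex field=source, with the backslash escaping adjusted as SPL requires.

```python
import re

# Hypothetical sketch: one pattern covering both path layouts.
# It captures a letters+digits token preceded by "\" and followed
# by either "\log\" (first layout) or "-" (second layout).
NODE_RE = re.compile(r"\\(\w+?\d+)(?=\\log\\|-)")

sources = [
    r"E:\view\int\t4\apch\node\node06\log\server.log",
    r"E:\view\int\t4\apch\node\node03\log\server.log",
    r"E:\view\int\t4\logs\core02-core.log",
    r"E:\view\int\t4\logs\web37-wfmws.log",
]

for s in sources:
    m = NODE_RE.search(s)
    print(m.group(1) if m else None)
# node06, node03, core02, web37
```

The lookahead keeps the separator out of the capture, so the same group name works for both formats; whether this survives SPL's extra layer of backslash escaping should be verified against the real events.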
Thank you @ITWhisperer. I was running stats again to capture a count that was already present in the data, along with hour as you mentioned. Here is the final query:

index=summary_index_1d "value=Summary_test" app_name=abc HTTP_STATUS_CODE=2xx
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
| eval day = strftime(_time, "%d")
| eval dayOfWeek = strftime(_time, "%u")
| where dayOfWeek >= 1 AND dayOfWeek <= 5
| stats avg(count_value) by log_day, hour, day

Let me know if any other changes to the query could improve its performance. Thanks again.
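For reference, the weekday filter in the query relies on strftime's %u numbering (1 = Monday through 7 = Sunday), keeping 1-5. The same logic, mirrored in Python via isoweekday(), which uses the identical 1-7 numbering:

```python
from datetime import datetime

def is_weekday(dt):
    """True for Monday (1) through Friday (5), matching %u's 1-7 numbering."""
    return 1 <= dt.isoweekday() <= 5

print(is_weekday(datetime(2024, 1, 1)))  # Monday -> True
print(is_weekday(datetime(2024, 1, 6)))  # Saturday -> False
```

This is only to document what the where clause filters out; the SPL itself already does the right thing.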
We are utilizing the Log Event trigger action for an alert, and we'd essentially like to duplicate the event that's found into another index. There is some renaming that happens in the alert, so pulling _raw wouldn't include the renamed fields, correct? If _raw is the way to go, what is the token for it? $result._raw$?
I've tried identifying all the individual fields in events and extracting them using rex:
| rex "\s\<externalFactor\>(?<externalFactor>.*)\<\/externalFactor\>"
| rex "\s\<externalFactorReturn\>(?<externalFactorReturn>.*)\<\/externalFactorReturn\>"
| rex "\<current\>(?<current>.*)\<\/current\>"
| rex "\<encrypted\>(?<encrypted>.*)\<\/encrypted\>"
| rex "\<keywordp\>(?<keywordp>.*)\<\/keywordp\>"
| rex "\<pepres\>(?<pepres>.*)\<\/pepres\>"
| rex "\<roleName\>(?<roleName>.*)\<\/roleName\>"
| rex "\<boriskhan\>(?<boriskhan>.*)\<\/boriskhan\>"
| rex "\<sload\>(?<sload>.*)\<\/sload\>"
| rex "\<externalFactor\>(?<externalFactor>.*)\<\/externalFactor\>"
| rex "\<parkeristrator\>(?<parkeristrator>.*)\<\/parkeristrator\>"
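A side note on the pattern itself: since the fields are simple XML elements, one generic expression can capture every tag/value pair at once instead of one rex per field, and `[^<]*` is safer than the greedy `.*` when several elements sit on one line. A sketch in Python for checking the idea; in SPL the rough equivalent would be a single rex with max_match=0 and named groups for tag and value (exact backreference syntax to be verified).

```python
import re

# One generic pattern for simple <tag>value</tag> elements.
# [^<]* prevents the greedy .* from swallowing across several
# elements that appear on the same line; \1 requires the closing
# tag to match the opening one.
TAG_RE = re.compile(r"<(\w+)>([^<]*)</\1>")

event = "<externalFactor>42</externalFactor><roleName>admin</roleName>"
pairs = dict(TAG_RE.findall(event))
print(pairs)  # {'externalFactor': '42', 'roleName': 'admin'}
```

If the events are well-formed XML, Splunk's spath command may also extract all of these fields without any regex at all, which is worth trying first.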
I have been running snmp_ta 1.8.0 for a few years and have many SNMP polls, both with and without MIBs. I am doing a walk, probably for the first time. I get successes, but then I get a bunch of these:

ERROR Exception resolving MIBs for table: 'MibTable' object has no attribute 'getIndicesFromInstId' stanza:snmp

and then I get two of these:

ERROR Exception resolving MIBs for table: list modified during sort stanza:snmp

and no more polling of this config. There are several IPs in this config. All works well for 2-4 polling cycles, then it stops. I can do snmpwalk -v 2c -c $PUBLIC -m $MIB and I get good results. I did recently install a new MIB for this device. The old MIB has the same issue, and the other configs work fine regardless. I am thinking it is related to snmpwalk, but I am having little success finding solutions. -- Frank
Please check this add-on: https://docs.splunk.com/Documentation/AddOns/released/MSSysmon/About The documentation says it is CIM compatible: "The Splunk Add-on for Sysmon allows a Splunk software administrator to create a Splunk software data input and CIM-compliant field extractions for Microsoft Sysmon." If the add-on feeds the Endpoint.Processes datamodel, then the use case you are interested in might work.
Like what? Could you please provide an example? If I use stats, it will either merge the 4 events into 1, or not fill the empty ones per document_type. The main key fields are document_number and document_type, which are required further on. So:

| stats max(timestamp1) as timestamp1, max(timestamp2) as timestamp2, ... by document_number

will unify the events by document_number, which is not what I would like to achieve, as there are many other required fields that are not shown in the example.

| stats max(timestamp1) as timestamp1, max(timestamp2) as timestamp2, ... by document_number, document_type

will do nothing, as each event will just be selected from itself and the empty fields will stay empty. P.S.: Sorry, I forgot to add the datetime_type to the example pictures; I will add them.
Is there a way to detect subsearch limits being exceeded in scheduled searches? I notice that you can get this info from REST:

| rest splunk_server=local /servicesNS/$user$/$app$/search/jobs/$search_id$
| where isnotnull('messages.error')
| fields id, savedsearch_name, app, user, executed_at, search, messages.*

And you can kind of join this to the _audit query:

index=_audit action=search (has_error_warn=true OR fully_completed_search=false OR info="bad_request")
| eval savedsearch_name = if(savedsearch_name="", "Ad-hoc", savedsearch_name)
| eval search_id = trim(search_id, "'")
| eval search = mvindex(search, 0)
| map search="| rest splunk_server=local /servicesNS/$user$/$app$/search/jobs/$search_id$
| where isnotnull('messages.error')
| fields id, savedsearch_name, app, user, executed_at, search, messages.*"

But it doesn't really work: I get lots of REST failures reported and the output is bad. You also need to run it while the search artifacts are still present, although my plan was to run this frequently and push the result to a summary index. Has anyone had better success with this? One thought would be to ingest the data that is returned by the REST call (I presume from var/run/dispatch). Or might debug-level logging help?
I use a PowerShell script in a Splunk forwarder that sends data with

Write-Output $line

Splunk receives this data in the _raw field. How should a PowerShell script write key-value pairs so that Splunk gets separate keys and values instead of just _raw?
Thank you for the reply. No, I'm not talking about dashboards at all. I want to do this within a search itself, without having to use dashboards, tokens, etc. I guess I'm looking for functionality similar to what you have in dashboards, but within the search itself.