All Posts



| rex max_match=0 "(?<keyvalue>\w+\s\[[^\]]+)" | mvexpand keyvalue | rex field=keyvalue "(?<key>\w+)\s\[(?<value>[^\]]+)" | eval {key}=value | fields - keyvalue key value | stats values(*) as * by _raw
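For anyone reading along, here is a minimal, self-contained way to try the generic key/value extraction above without touching a real index. The sample event text is invented for illustration only:

```
| makeresults
| eval _raw="Namespace [com.example.events], ServiceName [my-service], Version [0.0.1]"
| rex max_match=0 "(?<keyvalue>\w+\s\[[^\]]+)"
| mvexpand keyvalue
| rex field=keyvalue "(?<key>\w+)\s\[(?<value>[^\]]+)"
| eval {key}=value
| fields - keyvalue key value
| stats values(*) as * by _raw
```

The eval {key}=value step creates one field per extracted key name, so the final stats row should carry Namespace, ServiceName, and Version as columns.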
not getting status field from 2nd search - Not showing any results. Actually, I want to add a status column to the first search based on the results of the 2nd search: if a hostname matches a node name in the 2nd column, show the respective status; otherwise show null for status.
It would help to know what results your query returned and why those results aren't good enough. I prefer the rex command for extracting fields. The regular expressions below look for the given keyword, then extract what's between the following square brackets.
| rex "Namespace \[(?<Namespace>[^\]]+)"
| rex "ServiceName \[(?<ServiceName>[^\]]+)"
| rex "Version \[(?<Version>[^\]]+)"
| stats latest(Namespace) as Namespace latest(ServiceName) as ServiceName latest(Version) as Version by host
| sort -Version
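A quick way to sanity-check those extractions is a makeresults harness (the sample event below is invented for illustration):

```
| makeresults
| eval host="host1", _raw="Namespace [com.example.events], ServiceName [my-service], Version [0.0.1]"
| rex "Namespace \[(?<Namespace>[^\]]+)"
| rex "ServiceName \[(?<ServiceName>[^\]]+)"
| rex "Version \[(?<Version>[^\]]+)"
| table host Namespace ServiceName Version
```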
See https://community.splunk.com/t5/Splunk-Enterprise/%E4%B8%AD%E9%96%93%E8%BB%A2%E9%80%81%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6-About-intermediate-transfer/m-p/655756#M17221
Were you able to find a solution to this? I'm having the same issue.
Hello @isoutamo, thank you so much for your recommendation. It's working as expected; the only change I needed to make is marked in bold:
[<Your sourcetype>]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
FIELD_DELIMITER=|
FIELD_NAMES=f1,REG,USER,login,f5,f6,f7,src_ip,f9,f10,ts,f12,f13,f14,f15,f16,status
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%3Q%:z (-%z)
TIME_PREFIX=([^\|]*\|){10}
MAX_TIMESTAMP_LOOKAHEAD=29
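In case it helps the next reader: TIME_PREFIX=([^\|]*\|){10} skips the first 10 pipe-delimited fields, so the timestamp is read from the 11th field (ts in FIELD_NAMES). A hypothetical event matching this layout would look something like:

```
v1|REG01|jdoe|login|a|b|c|10.0.0.1|x|y|2023-06-01T12:34:56.789+00:00|p|q|r|s|t|OK
```

The field values above are invented; only the position of the timestamp matters for the TIME_PREFIX/MAX_TIMESTAMP_LOOKAHEAD settings.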
Hi, the query started working after upgrading the app to version 8.10. @isoutamo, @gcusello, thank you for the help.
Hi @gcusello, yes, it's a distributed on-prem installation. I am not using any add-on for ingesting data. I am using an HTTP Event Collector token to send AWS CloudWatch logs to the Splunk indexers (with load balancing). From the GUI it's possible to select multiple indexes, but only the default index is used as the log index. So far all the logs are going to the default index, and I don't see an option in the HEC settings or GUI where I can change the index name for part of the logs coming through HEC. I tried overriding the index value as you mentioned, but it doesn't work. Any idea what's wrong in the below config?
props.conf
[source::syslogng:dev/syslogng/*]
TRANSFORMS-hecpaloalto = hecpaloalto
disabled = false
transforms.conf
[hecpaloalto]
DEST_KEY = _MetaData:Index
REGEX = (.*)
FORMAT = palo_alto
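One debugging sketch, assuming the events are already searchable in the default index: first confirm what source value the HEC events actually carry, since the props stanza only fires when the source matches [source::syslogng:dev/syslogng/*]:

```
index=main
| stats count by source, sourcetype, index
```

(index=main is an assumption for the default index; adjust to yours.) Also note that index-time TRANSFORMS run during parsing on the indexers, so the props.conf/transforms.conf pair must be deployed there, not on a search head.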
I ended up having 1 alert that triggers on a cron schedule; when it triggers, it kicks off 1 email per result. That email has a tokenized variable which I use to direct WHERE the email goes, and which is also used to generate a custom URL, so from the email someone can click that URL and be brought to a Splunk dashboard containing the necessary data for said recipients.
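A hedged sketch of the pattern described above (the index, field names, and URL are all hypothetical):

```
index=app_alerts status=failed
| stats count by team, team_email
| eval drilldown_url="https://splunk.example.com/app/search/team_dashboard?form.team=".team
```

In the email alert action you can then put $result.team_email$ in the To field and $result.drilldown_url$ in the message body, with "Trigger: For each result" selected, so each result row drives one addressed email carrying its own link.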
Hi, I want to separate out the below fields in table format.
Raw = Namespace [com.sampple.ne.vas.events], ServiceName [flp-eg-cg], Version [0.0.1], isActive [true], AppliationType [EVENT]
Query I am using:
| eval Namespace=mvindex(split(mvindex(split(_raw, "Namespace "),1),"],"),1)
| eval ServiceName=mvindex(split(mvindex(split(_raw,"ServiceName "),1),"],"),0)
| eval Version=mvindex(split(mvindex(split(_raw,"Version "),1),"],"),0)
| stats latest(Namespace) as Namespace latest(ServiceName) as ServiceName latest(Version) as Version by host
| sort -Version
Expected result columns: Host | AppName | ServiceName | Version
I see a couple of logs starting with this log format too: <><><><>||1407|| Could you please provide the rex expression to go with the already provided solution, @ITWhisperer?
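Without a fuller sample event it's hard to be exact, but as a rough, hedged sketch for a line starting with <><><><>||1407||, something like the following could pull out the number between the double pipes (the field name code is made up; adjust the pattern to the real format):

```
| rex "^(?:<[^>]*>)+\|\|(?<code>\d+)\|\|"
```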
I am not looking for makeresults, as that would be hard-coded.
This solution isn't reading the log file. The below modified query works for me, but a similar query doesn't work for other fields like groupByUser etc., even after updating spath. Please advise if something needs to be adapted below. This is the most ideal query for me, but it needs to read the other fields as well.
index=log-1696-nonprod-c laas_appId=tsproid_qa.sytsTaskRunner laas_file="/tmp/usage_snapshot.json"
| head 1
| fields - _time
``` Convert the _raw to compliant JSON ```
| eval _raw="{"._raw."}"
``` Extract the groupByAction field - this resolves the escaped double quotes ```
| spath groupByAction
``` Extract the groups into a multi-valued field ```
| rex max_match=0 field=groupByAction "(?<group>\{[^\}]+\})"
``` Expand the multi-value field ```
| mvexpand group
``` Extract the fields from the group ```
| spath input=group
``` Output the table ```
| table action totalCount
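As a hedged sketch (assuming groupByUser has the same shape as groupByAction, i.e. an array of small JSON objects, and assuming those objects carry user and totalCount keys - both assumptions), the same pipeline can be repeated per field:

```
| spath groupByUser
| rex max_match=0 field=groupByUser "(?<group>\{[^\}]+\})"
| mvexpand group
| spath input=group
| table user totalCount
```

If the inner keys differ, running | spath input=group | fieldsummary once will show what actually gets extracted.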
Yes there is an event for my colleague for today.
Hi all, I would like to download the Splunk Add-on for AWS 6.0.0 version documentation for my reference. I spent some time searching on Google and on https://docs.splunk.com/ but was unable to find it. Could anyone guide me on how to get the previous release documentation from the Splunk site? Thanks in advance.
This solved the issue: | where '%field2'!='field1'
Hi all, after running several actions from the EWS for O365 app (version 2.12.0) in Phantom, the following error is received: "API failed. Status code: ErrorInvalidIdMalformed. Message: Id is malformed.". The app documentation does not specify the expected field format for "Message ID". I'm using the Message-Id field extracted from the original email headers. Is this correct? Is there any other way to obtain the message id? Which is the expected format? Thanks in advance!
@ITWhisperer, sorry, but this is not working in my case.
For adding two KPIs in SA topology: the KPI queries, taken from the Monitoring Console, use the REST API. They work only on the Monitoring Console and give no results on the Search Head or in ITSI, where they are required. The error is: "Restricting the results of the rest operator to local instance because you do not have the dispatch_rest_to_indexers capability". How can I proceed?
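One possible workaround sketch, assuming the KPI only needs data the search head can reach directly: the rest command accepts a splunk_server option, so instead of fanning out to the indexers (which requires the dispatch_rest_to_indexers capability), you can target a specific instance explicitly. The endpoint below is a placeholder; substitute the one from your Monitoring Console query:

```
| rest splunk_server=local /services/server/info
| table splunkServerName version
```

Alternatively, an admin can grant the dispatch_rest_to_indexers capability to the role running the KPI searches via authorize.conf.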