Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Thanks @richgalloway, this helps.
Thanks @ITWhisperer 
It depends on the type of Excel file.  A .xls file is binary and so will not be ingested by Splunk.  The UF's splunkd.log file should confirm this. Newer Excel files are .xlsx, which is XML format.  That can be ingested by Splunk, but may be of limited utility if you can't interpret the XML. There are also .xlsm files, which contain macros, but I'm not sure how they're stored. Again, the UF should log a message when it's unable to monitor/ingest a file.
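If the forwarder is skipping the file, a quick way to confirm is to search the UF's splunkd.log from your search head via the _internal index (the host and file name below are placeholders for your own values):

index=_internal sourcetype=splunkd source=*splunkd.log* host=<your_uf_host> "<your_file_name>"

Any warning or error the forwarder logged about that file should show up there.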
No changes need to be made to the data.  Just configure the HF as described earlier.  There is an app you must download from your Splunk Cloud search head. Go to the "Universal Forwarder" app and click the green download button. Install the downloaded app on the HFs. Despite the name, the app can be used on either UFs or HFs.
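If you'd rather install the downloaded credentials app from the command line than through the web UI, a minimal sketch (the package name and path below are assumptions; use whatever file the download button actually gives you):

$SPLUNK_HOME/bin/splunk install app /tmp/splunkclouduf.spl
$SPLUNK_HOME/bin/splunk restart

Installing through Apps > Manage Apps > Install app from file accomplishes the same thing.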
What action is in that event? Why was it not found by your search?
Probably because your example does not adequately reflect your actual data, e.g. do you have special characters which would disrupt a regex match?
| rex max_match=0 "(?<keyvalue>\w+\s\[[^\]]+)"
| mvexpand keyvalue
| rex field=keyvalue "(?<key>\w+)\s\[(?<value>[^\]]+)"
| eval {key}=value
| fields - keyvalue key value
| stats values(*) as * by _raw
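A quick way to sanity-check that extraction against the sample event from the original question (the makeresults/eval lines are only a stand-in for the real search while testing):

| makeresults
| eval _raw="Namespace [com.sampple.ne.vas.events], ServiceName [flp-eg-cg], Version [0.0.1], isActive [true], AppliationType [EVENT]"
| rex max_match=0 "(?<keyvalue>\w+\s\[[^\]]+)"
| mvexpand keyvalue
| rex field=keyvalue "(?<key>\w+)\s\[(?<value>[^\]]+)"
| eval {key}=value
| fields - keyvalue key value
| stats values(*) as * by _raw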
Not getting the status field from the 2nd search; it's not showing any results. Actually, I want to add a status column to the first search based on the 2nd search results: if any hostname matches a node name in the 2nd column, show the respective status; if there is no match, show null for status.
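One common pattern for this kind of enrichment is a left join on the shared field (a sketch only; hostname, node_name, and status are assumed field names based on the description above):

<your first search>
| join type=left hostname
    [ search <your second search>
    | rename node_name AS hostname
    | table hostname status ]

Hosts with no match in the 2nd search will have an empty status, which can be left as-is or filled with fillnull.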
It would help to know what results your query returned and why those results aren't good enough. I prefer the rex command for extracting fields. The regular expressions below look for the given keyword then extract what's between the following square brackets.

| rex "Namespace \[(?<Namespace>[^\]]+)"
| rex "ServiceName \[(?<ServiceName>[^\]]+)"
| rex "Version \[(?<Version>[^\]]+)"
| stats latest(Namespace) as Namespace latest(ServiceName) as ServiceName latest(Version) as Version by host
| sort -Version
See https://community.splunk.com/t5/Splunk-Enterprise/%E4%B8%AD%E9%96%93%E8%BB%A2%E9%80%81%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6-About-intermediate-transfer/m-p/655756#M17221
Were you able to find a solution to this? I'm having the same issue.
Hello @isoutamo, Thank you so much for your recommendation. It's working as expected; the only change I needed to make is marked in bold:

[<Your sourcetype>]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
FIELD_DELIMITER=|
FIELD_NAMES=f1,REG,USER,login,f5,f6,f7,src_ip,f9,f10,ts,f12,f13,f14,f15,f16,status
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%3Q%:z (-%z)
TIME_PREFIX=([^\|]*\|){10}
MAX_TIMESTAMP_LOOKAHEAD=29
Hi, The query started working after upgrading the app to version 8.10. @isoutamo, @gcusello, thank you for the help.
Hi @gcusello, yes, it's a distributed on-prem installation. I am not using any add-on for ingesting data. I am using an HTTP Event Collector token to send AWS CloudWatch logs to the Splunk indexers (using load balancing). From the GUI it's possible to select multiple allowed indexes, but only the default index is used as the log index. So far all the logs are going to the default index, and I don't see an option in the HEC settings or GUI where I can change the index name for a subset of the logs coming through HEC. I tried overriding the index value as you mentioned, but it doesn't work. Any idea what's wrong in the below config?

props.conf
[source::syslogng:dev/syslogng/*]
TRANSFORMS-hecpaloalto = hecpaloalto
disabled = false

transforms.conf
[hecpaloalto]
DEST_KEY = _MetaData:Index
REGEX = (.*)
FORMAT = palo_alto
I ended up having 1 alert that triggers on a cron schedule, and when it triggers it kicks off 1 email per result. That email has a tokenized variable which I use to direct WHERE the email goes, and which is also used to generate a custom URL, so from the email someone can click that URL and be brought to a Splunk dashboard containing the necessary data for said recipients.
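For anyone trying to reproduce this, a rough savedsearches.conf sketch (the recipient, hostname, and dashboard_url field names are assumptions; your search would need to return them in each result, and the trigger-condition settings are omitted for brevity):

[my_per_result_alert]
enableSched = 1
cron_schedule = */15 * * * *
alert.digest_mode = 0
action.email = 1
action.email.to = $result.recipient$
action.email.subject = Alert for $result.hostname$
action.email.message.alert = Review the details here: $result.dashboard_url$

alert.digest_mode = 0 is what makes the alert send one email per result instead of one digest per search.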
Hi, I want to separate out the below fields in table format.

Raw = Namespace [com.sampple.ne.vas.events], ServiceName [flp-eg-cg], Version [0.0.1], isActive [true], AppliationType [EVENT]

Query I am using:
| eval Namespace=mvindex(split(mvindex(split(_raw, "Namespace "),1),"],"),1)
| eval ServiceName=mvindex(split(mvindex(split(_raw,"ServiceName "),1),"],"),0)
| eval Version=mvindex(split(mvindex(split(_raw,"Version "),1),"],"),0)
| stats latest(Namespace) as Namespace latest(ServiceName) as ServiceName latest(Version) as Version by host
| sort -Version

Expected result: a table with columns Host, AppName, ServiceName, Version.
Not showing any results. Actually, I want to add a status column to the first search based on the 2nd search results: if any hostname matches a node name in the 2nd column, show the respective status; if there is no match, show null for status.
I see a couple of logs starting with this log format too: <><><><>||1407|| Could you please provide the rex expression for these along with the already provided solution, @ITWhisperer?
I am not looking for makeresults, as that would be hard-coded.
This solution isn't reading the log file. The modified query below works for me, but a similar query doesn't work for other fields like groupByUser etc., even after updating spath. Please advise and see if something needs to be adapted below. This is the most ideal query for me, but it needs to read the other fields as well.

index=log-1696-nonprod-c laas_appId=tsproid_qa.sytsTaskRunner laas_file="/tmp/usage_snapshot.json"
| head 1
| fields - _time
``` Convert the _raw to compliant JSON ```
| eval _raw="{"._raw."}"
``` Extract the groupByAction field - this resolves the escaped double quotes ```
| spath groupByAction
``` Extract the groups into a multi-valued field ```
| rex max_match=0 field=groupByAction "(?<group>\{[^\}]+\})"
``` Expand the multi-value field ```
| mvexpand group
``` Extract the fields from the group ```
| spath input=group
``` Output the table ```
| table action totalCount
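If the other groupings follow the same structure, the same pattern should carry over with just the field names swapped; a sketch for groupByUser (the user and totalCount field names are assumptions about what that part of the JSON contains):

index=log-1696-nonprod-c laas_appId=tsproid_qa.sytsTaskRunner laas_file="/tmp/usage_snapshot.json"
| head 1
| fields - _time
| eval _raw="{"._raw."}"
| spath groupByUser
| rex max_match=0 field=groupByUser "(?<group>\{[^\}]+\})"
| mvexpand group
| spath input=group
| table user totalCount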