Getting Data In

How to write this props.conf & transforms.conf configuration?

mounikad
Explorer

We have to filter out the data that has Result=pass or status=200 and send the other logs to Splunk. We were receiving logs in Splunk before adding props.conf and transforms.conf. We have the following configuration in props.conf & transforms.conf.

/opt/splunk/etc/apps/TA-AlibabaCloudSLS/default/transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = result\=200
DEST_KEY = queue
FORMAT = indexQueue

[cloudnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[cloudparsing]
REGEX = result\=pass
DEST_KEY = queue
FORMAT = indexQueue

 

/opt/splunk/etc/apps/TA-AlibabaCloudSLS/default/props.conf

[alibaba:cloudfirewall]
TRANSFORMS-set= cloudnull,cloudparsing

[alibaba:waf]
TRANSFORMS-set= setnull,setparsing

 

But now we are not receiving any logs in Splunk, although there are logs in Alibaba Cloud. Below is the inputs.conf file:

 

/opt/splunk/etc/apps/TA-AlibabaCloudSLS/local/inputs.conf

[sls_datainput://Alibaba_Cloud_Firewall]
event_retry_times = 0
event_source = alibaba:cloudfirewall
event_sourcetype = alibaba:cloudfirewall
hec_timeout = 120
index = *****
interval = 300
protocol = private
sls_accesskey = *****
sls_cg = ******
sls_cursor_start_time = end
sls_data_fetch_interval = 1
sls_endpoint = *******
sls_heartbeat_interval = 60
sls_logstore = *****
sls_max_fetch_log_group_size = 1000
sls_project = *******
unfolded_fields = {"actiontrail_audit_event": ["event"], "actiontrail_event": ["event"] }

 

[sls_datainput://Alibaba_waf]
event_retry_times = 0
event_source = alibaba:waf
event_sourcetype = alibaba:waf
hec_timeout = 120
index = *****
interval = 300
protocol = private
sls_accesskey = ******
sls_cg = *******
sls_cursor_start_time = end
sls_data_fetch_interval = 1
sls_endpoint = ****
sls_heartbeat_interval = 60
sls_logstore = *****
sls_max_fetch_log_group_size = 1000
sls_project = ****
unfolded_fields = {"actiontrail_audit_event": ["event"], "actiontrail_event": ["event"] }


VatsalJagani
SplunkTrust

@mounikad - Try the configuration below, based on your description:


@mounikad wrote:

We have to filter the data which has Result=pass, status=200 and send the other logs to Splunk.


 

transforms.conf

[setparsing]
REGEX = result\=200
DEST_KEY = queue
FORMAT = nullQueue

[cloudparsing]
REGEX = result\=pass
DEST_KEY = queue
FORMAT = nullQueue

 

 props.conf

[alibaba:cloudfirewall]
TRANSFORMS-filter_logs = cloudparsing

[alibaba:waf]
TRANSFORMS-filter_logs = setparsing

 

I hope this helps!!! Upvote/Karma would be appreciated!!!


mounikad
Explorer

Hi @VatsalJagani ,

We are still getting the Result=pass, status=200 logs. We don't need these logs to be indexed.


VatsalJagani
SplunkTrust

@mounikad - There was some confusion between the description and the regex in the initial question:


Still we are getting the Result=pass, status=200 logs. we don't need these logs to be indexed. 


VS

[setparsing]
REGEX = result\=200
DEST_KEY = queue
FORMAT = indexQueue



You need to write in the regex exactly what you see in the _raw events; extracted fields will not work in TRANSFORMS.

So the assumption here is that your _raw event contains "result=200" somewhere in the raw text.
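The distinction between matching extracted fields and matching the raw text can be sanity-checked outside Splunk. A minimal sketch in Python, where the sample _raw event is a hypothetical JSON-style WAF log assumed purely for illustration:

```python
import re

# Hypothetical _raw event, assumed for illustration only --
# the actual Alibaba Cloud WAF event layout may differ.
raw = '{"host": "example.com", "status": "200", "rule_result": "pass"}'

# A regex written against an extracted field name/value pair...
field_style = re.compile(r"result=200")
# ...never fires, because "result=200" never appears in the raw text.
print(bool(field_style.search(raw)))   # False

# A regex written against what is literally in _raw does fire.
raw_style = re.compile(r'"status":\s*"200"')
print(bool(raw_style.search(raw)))     # True
```

This is exactly why REGEX = result\=200 silently matches nothing here: TRANSFORMS runs against _raw at parse time, before any search-time field extraction exists.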

 

I hope this helps!!!


mounikad
Explorer

Hi @VatsalJagani 

We have "status": "200" & "rule_result": "pass" in the _raw text. I used REGEX = status: 200 & REGEX = rule_result: pass, but we are still getting the logs. Can you please let me know the REGEX we need to use for

"status": "200"

"rule_result": "pass"


PickleRick
SplunkTrust

Yes, you have to put in the strings literally (or regexes matching the raw event contents). At this point Splunk has no awareness of fields of any kind (apart from index-time fields).

So you'd have to put in something like

REGEX = "status"\s*:\s*"200"

and

REGEX = "rule_result"\s*:\s*"pass"

(The \s* parts are thrown in just to be sure it works even if the literal contents change a bit while still being proper JSON.)

But from what I'm seeing, you're trying to do the opposite of what you're saying.

If you set nullQueue as the default and only put those specific events in the indexQueue, you get just that: you're indexing _only_ those events instead of all events _except_ those.

So you'd rather want something similar to what @VatsalJagani showed before.

Don't do the default nullQueue transform. Just do your transforms.conf like this:

[setparsing]
REGEX = "status"\s*:\s*"200"
DEST_KEY = queue
FORMAT = nullQueue

[cloudparsing]
REGEX = "rule_result"\s*:\s*"pass"
DEST_KEY = queue
FORMAT = nullQueue

With the corresponding props.conf as shown above, you're good to go.
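The routing logic behind this is that Splunk applies the transforms named in a TRANSFORMS- list in order, each matching transform overwriting the queue key, so the last match wins; anything that matches no transform stays in the default indexQueue. A minimal Python sketch of that behavior, using the two regexes above (sample events are hypothetical):

```python
import re

# Transforms from the config above: each matching regex sends the event
# to the nullQueue; events matching neither stay in the default indexQueue.
transforms = [
    (re.compile(r'"status"\s*:\s*"200"'),       "nullQueue"),
    (re.compile(r'"rule_result"\s*:\s*"pass"'), "nullQueue"),
]

def route(raw_event):
    queue = "indexQueue"  # default destination when nothing matches
    # Transforms run in order; each match overwrites the queue (last wins).
    for regex, dest in transforms:
        if regex.search(raw_event):
            queue = dest
    return queue

# Hypothetical raw events, assumed for illustration only.
print(route('{"status": "200", "rule_result": "pass"}'))   # nullQueue
print(route('{"status": "403", "rule_result": "block"}'))  # indexQueue
```

This also shows why the original setnull/cloudnull pair dropped everything: a REGEX = . transform matches every event and routes it to the nullQueue unless a later transform re-routes it.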

VatsalJagani
SplunkTrust

@mounikad - your regex would be something like:

REGEX = \"status\":\s*\"200\"

REGEX = \"rule_result\":\s*\"pass\"

 

I hope this helps!!! Karma/upvote would be appreciated.
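The two pattern styles suggested in this thread differ slightly: one allows whitespace only after the colon, the other on both sides. A quick check against whitespace variants that are all valid JSON (sample strings are assumptions):

```python
import re

# The two patterns suggested in this thread for the "status" field.
after_colon_only = re.compile(r'"status":\s*"200"')      # \s* after the colon only
both_sides       = re.compile(r'"status"\s*:\s*"200"')   # \s* on both sides

variants = ['"status":"200"', '"status": "200"', '"status" : "200"']
for v in variants:
    print(v, bool(after_colon_only.search(v)), bool(both_sides.search(v)))
# Only the last variant, with a space before the colon, is matched
# solely by the both-sides pattern.
```

If the upstream JSON serializer ever emits a space before the colon, only the \s*-on-both-sides form keeps matching, so it is the safer choice.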
