Getting Data In

How do I correctly configure Splunk to monitor a CSV-structured log?

phamxuantung
Communicator

Hello,

Our Splunk Enterprise deployment consists of 1 Master, 2 Search Heads, and a 4-node Indexer Cluster. The Master also acts as the deployment server (Forwarder Management), and the deployment apps live there.

Now I want to index some log files from a server (which already has a UF installed) that have a CSV structure but do not have a .csv extension.

The log file looks like this:

api_key,api_method_name,bytes,cache_hit,client_transfer_time,connect_time,endpoint_name,http_method,http_status_code,http_version,oauth_access_token,package_name,package_uuid,plan_name,plan_uuid,pre_transfer_time,qps_throttle_value,quota_value,referrer,remote_total_time,request_host_name,request_id,request_time,request_uuid,response_string,service_definition_endpoint_uuid,service_id,service_name,src_ip,ssl_enabled,total_request_exec_time,traffic_manager,traffic_manager_error_code,uri,user_agent,org_name,org_uuid,sub_org_name,sub_org_uuid
unknown,-,30,0,0.0,0.0,-,POST,596,HTTP/1.1,-,-,-,-,-,0.0,0,0,-,0.0,developer.napas.com.vn,1675641598.598_unknown_unknown,2023-02-05T23:59:58,dafeac38-123d-4bb7-aa1c-59680afbc0b2,596 Service Not Found (Proxy),-,unknown,-,10.244.1.0,1,0.0,tm-deploy-0-97674db57-smcdv,ERR_596_SERVICE_NOT_FOUND,/healthcheck,-,-,-,-,-
unknown,-,30,0,0.0,0.0,-,POST,596,HTTP/1.1,-,-,-,-,-,0.0,0,0,-,0.0,developer.napas.com.vn,1675641608.030_unknown_unknown,2023-02-06T00:00:08,e4cd645a-5471-4097-baf0-67f90f4d2cee,596 Service Not Found (Proxy),-,unknown,-,10.244.1.0,1,0.001,tm-deploy-0-97674db57-smcdv,ERR_596_SERVICE_NOT_FOUND,/healthcheck,-,-,-,-,-
unknown,-,30,0,0.0,0.0,-,POST,596,HTTP/1.1,-,-,-,-,-,0.0,0,0,-,0.0,developer.napas.com.vn,1675641618.607_unknown_unknown,2023-02-06T00:00:18,ee18e506-2ea5-4792-a586-f0274e6c823b,596 Service Not Found (Proxy),-,unknown,-,10.244.1.0,1,0.0,tm-deploy-0-97674db57-smcdv,ERR_596_SERVICE_NOT_FOUND,/healthcheck,-,-,-,-,-
unknown,-,30,0,0.0,0.0,-,POST,596,HTTP/1.1,-,-,-,-,-,0.0,0,0,-,0.0,developer.napas.com.vn,1675641627.988_unknown_unknown,2023-02-06T00:00:27,5cc9f704-61a3-443c-b670-26373afe5502,596 Service Not Found (Proxy),-,unknown,-,10.244.1.0,1,0.0,tm-deploy-0-97674db57-smcdv,ERR_596_SERVICE_NOT_FOUND,/healthcheck,-,-,-,-,-


The log files are named like access_worker5_2023_2_5.log or access_worker5_2023_2_5.log.1.
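Since the files lack a .csv extension, the monitor input has to pick them up by path pattern. A minimal sketch of such a stanza — the whitelist regex here is illustrative, not taken from the actual config:

```
# inputs.conf on the UF -- whitelist regex is illustrative, matching
# names like access_worker5_2023_2_5.log and access_worker5_2023_2_5.log.1
[monitor:///u01/pv/log-1/data/trafficmanager/enriched/access/*]
whitelist = access_worker\d+_\d{4}_\d+_\d+\.log(\.\d+)?$
index = myindex
sourcetype = mllog.new
```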

Before configuring inputs.conf in my deployment app, I configured props.conf and transforms.conf on my Search Head in /splunk/etc/apps/search/local as follows:

props.conf


[mllog.new]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
description = sourcetype for index csv
disabled = false
pulldown_type = true
FIELD_NAMES = api_key,api_method_name,bytes,cache_hit,client_transfer_time,connect_time,endpoint_name,http_method,http_status_code,http_version,oauth_access_token,package_name,package_uuid,plan_name,plan_uuid,pre_transfer_time,qps_throttle_value,quota_value,referrer,remote_total_time,request_host_name,request_id,request_time,request_uuid,response_string,service_definition_endpoint_uuid,service_id,service_name,src_ip,ssl_enabled,total_request_exec_time,traffic_manager,traffic_manager_error_code,uri,user_agent,org_name,org_uuid,sub_org_name,sub_org_uuid
TIMESTAMP_FIELDS = request_time
REPORT-tibco-mllog-new = REPORT-tibco-mllog-new
DATETIME_CONFIG = 
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,

###THIS IS FOR TESTING###
[mllog.new2]
DATETIME_CONFIG = 
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category =  Custom
disalbe = false
pulldown_type = 1
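One thing worth noting about the stanza above: INDEXED_EXTRACTIONS is applied where the file is first read, so for a file monitored by a UF these structured-data settings need to be in props.conf on the forwarder itself (e.g. inside the deployment app pushed to it), not only on the Search Head. A minimal forwarder-side sketch, reusing the settings from the config above (TIME_FORMAT is inferred from the sample timestamps, not from the original config):

```
# props.conf deployed to the UF -- structured (CSV) parsing happens at the forwarder
[mllog.new]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = request_time
# illustrative, matching timestamps like 2023-02-05T23:59:58
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
```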


transforms.conf


[REPORT-tibco-mllog-new]
DELIMS = ","
FIELDS = "api_key","api_method_name","bytes","cache_hit","client_transfer_time","connect_time","endpoint_name","http_method","http_status_code","http_version","oauth_access_token","package_name","package_uuid","plan_name","plan_uuid","pre_transfer_time","qps_throttle_value","quota_value","referrer","remote_total_time","request_host_name","request_id","request_time","request_uuid","response_string","service_definition_endpoint_uuid","service_id","service_name","src_ip","ssl_enabled","total_request_exec_time","traffic_manager","traffic_manager_error_code","uri","user_agent","org_name","org_uuid","sub_org_name","sub_org_uuid"


Then I configured the deployment app as usual, with inputs.conf like this:

inputs.conf


[monitor:///u01/pv/log-1/data/trafficmanager/enriched/access/*]
disabled = 0
index = myindex
sourcetype = mllog.new

###THIS IS FOR TESTING###
[monitor:///u01/pv/log-1/data/trafficmanager/enriched/access/*]
disabled = 0
index = myindex
sourcetype = mllog.new2
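A note on the two stanzas above: in inputs.conf, stanzas with an identical [monitor://...] path are merged into a single input, so only one sourcetype takes effect, and a given file is only read once. If that is what is happening here (which would match mllog.new2 receiving all the events), testing two sourcetypes side by side would need distinct paths or whitelists, e.g. (illustrative sketch):

```
# inputs.conf -- identical monitor stanzas are merged; a whitelist can
# restrict each stanza to a distinct subset of files
[monitor:///u01/pv/log-1/data/trafficmanager/enriched/access/*]
whitelist = access_worker5.*\.log(\.\d+)?$
disabled = 0
index = myindex
sourcetype = mllog.new
```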


After configuring and restarting, I ran two queries:

index = myindex sourcetype = mllog.new

-> 0 events

index = myindex sourcetype = mllog.new2

-> There are events, but the line breaking is wrong: some events have 1 line (correct), while others have 2 lines or even 257 lines (clearly wrong); the header row was indexed as an event, and there is no field separation.
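When debugging line breaking like this, the default indexed field linecount gives a quick view of how events were broken per sourcetype — an illustrative SPL check:

```
index=myindex
| stats count by sourcetype, linecount
| sort sourcetype, linecount
```

A correctly broken CSV event should show linecount=1; large linecount values point at line-merging or line-breaking settings not being applied at parse time.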


So clearly I have misconfigured something; can someone point me in the right direction?
