Getting Data In

Tenable JSON logs are not filtering

Dilsheer_P
Loves-to-Learn Lots

Hi

I have Tenable JSON logs. I wrote a regex to send matching events to the null queue; however, they are not going to nullQueue.

A sample log is given below.


{

SC_address: X.xx.xx
acceptRisk: false
acceptRiskRuleComment:
acrScore:
assetExposureScore:
baseScore:
bid:
checkType: summary
cpe:
custom_severity: false
cve:
cvssV3BaseScore:
cvssV3TemporalScore:
cvssV3Vector:
cvssVector:
description: This plugin displays, for each tested host, information about the scan itself :

- The version of the plugin set.
- The type of scanner (Nessus or Nessus Home).
- The version of the Nessus Engine.
- The port scanner(s) used.
- The port range scanned.
- The ping round trip time
- Whether credentialed or third-party patch management checks are possible.
- Whether the display of superseded patches is enabled
- The date of the scan.
- The duration of the scan.
- The number of hosts scanned in parallel.
- The number of checks done in parallel.
dnsName: xxxx.xx.xx
exploitAvailable: No
exploitEase:
exploitFrameworks:
family: { ... }
firstSeen: X
hasBeenMitigated: false
hostUUID:
hostUniqueness: repositoryID,ip,dnsName
ip: x.x.x.x
ips: x.x.x.x
keyDrivers:
lastSeen: x
macAddress:
netbiosName: x\x
operatingSystem: Microsoft Windows Server X X X X
patchPubDate: -1
pluginID: 19506
pluginInfo: 19506 (0/6) Nessus Scan Information
pluginModDate: X
pluginName: Nessus Scan Information
pluginPubDate: xx
pluginText: <plugin_output>Information about this scan :

Nessus version : 10.8.3
Nessus build : 20010
Plugin feed version : XX
Scanner edition used : X
Scanner OS : X
Scanner distribution : X-X-X
Scan type : Normal
Scan name : ABCSCAN
Scan policy used : x-161b-x-x-x-x/Internal Scanner 02 - Scan Policy (Windows & Linux)
Scanner IP : x.x.x.x
Port scanner(s) : nessus_syn_scanner
Port range : 1-5
Ping RTT : 14.438 ms
Thorough tests : no
Experimental tests : no
Scan for Unpatched Vulnerabilities : no
Plugin debugging enabled : no
Paranoia level : 1
Report verbosity : 1
Safe checks : yes
Optimize the test : yes
Credentialed checks : no
Patch management checks : None
Display superseded patches : no (supersedence plugin did not launch)
CGI scanning : disabled
Web application tests : disabled
Max hosts : 30
Max checks : 5
Recv timeout : 5
Backports : None
Allow post-scan editing : Yes
Nessus Plugin Signature Checking : Enabled
Audit File Signature Checking : Disabled
Scan Start Date : x/x/x x
Scan duration : X sec
Scan for malware : no
</plugin_output>
plugin_id: xx
port: 0
protocol: TCP
recastRisk: false
recastRiskRuleComment:
repository: { ... }
riskFactor: None
sc_uniqueness: x_x.x.x.x_xxxx.xx.xx
seeAlso:
seolDate: -1
severity: informational
severity_description: Informative
severity_id: 0
solution:
state: open
stigSeverity:
synopsis: This plugin displays information about the Nessus scan.
temporalScore:
uniqueness: repositoryID,ip,dnsName
uuid: x-x-x-xx-xxx
vendor_severity: Info
version: 1.127
vprContext: []
vprScore:
vulnPubDate: -1
vulnUUID:
vulnUniqueness: repositoryID,ip,port,protocol,pluginID
xref:
}


In props.conf:

[tenable:sc:vuln]
TRANSFORMS-Removetenable_remove_logs = tenable_remove_logs

In transforms.conf:

[tenable_remove_logs]
SOURCE_KEY = _raw
REGEX = ABCSCAN
DEST_KEY = queue
FORMAT = nullQueue

It is not working. Any solutions? I later removed SOURCE_KEY as well, and that did not work either.
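
For reference, whether the HF actually picks these settings up can be confirmed with btool (a generic sanity check; --debug shows which .conf file each setting comes from):

splunk btool props list tenable:sc:vuln --debug
splunk btool transforms list tenable_remove_logs --debug

A typo in the stanza name, or a file sitting in the wrong app directory, shows up immediately in that output.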

PickleRick
SplunkTrust

OK. Please describe your ingestion process.

Where do the events come from? How are they received/pulled? On which component?

Where does the event stream go to from there? What components are involved and in what order?

Where are you putting your settings?

dural_yyz
Builder

In props.conf:

[tenable:sc:vuln]
TRANSFORMS-Removetenable_remove_logs = tenable_remove_logs

In transforms.conf:

[tenable_remove_logs]
SOURCE_KEY = _raw
REGEX = ABCSCAN
DEST_KEY = queue
FORMAT = nullQueue

Do you have any other TRANSFORMS-<class> or REPORTS-<class> statements in this props stanza? The order of processing could be creating issues. I'm throwing Hail Marys since I'm at a loss.
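
For illustration, if the stanza carried more than one class it could look something like this (the extra class and transform names here are hypothetical):

[tenable:sc:vuln]
TRANSFORMS-a_rewrite = some_raw_rewrite
TRANSFORMS-Removetenable_remove_logs = tenable_remove_logs

Within a single class, transforms run in the order listed; if some earlier transform rewrites _raw or re-routes the queue before tenable_remove_logs gets its turn, the nullQueue match can silently stop applying.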

PaulPanther
Motivator

Where have you applied these settings? On an indexer or on a heavy forwarder?

Dilsheer_P
Loves-to-Learn Lots

On a Heavy Forwarder.

PaulPanther
Motivator

Okay, and I guess the data is pulled via the Tenable API, right?

Could you try:

In transforms.conf:

[tenable_remove_logs]
REGEX = (?m)(ABCSCAN)
DEST_KEY = queue
FORMAT = nullQueue

If that does not work, I would increase the depth_limit for testing.

Dilsheer_P
Loves-to-Learn Lots

I tried this as well, and also increased depth_limit in limits.conf on the HF under the Tenable add-on's local directory. Still not working:

[rex]
depth_limit=10000

The total length of an event is about 9,450 characters. Still not working.

PaulPanther
Motivator

The DEPTH_LIMIT parameter must be set in transforms.conf:

DEPTH_LIMIT = <integer>
* Only set in transforms.conf for REPORT and TRANSFORMS field extractions.
   For EXTRACT type field extractions, set this in props.conf.
* Optional. Limits the amount of resources that are spent by PCRE
  when running patterns that do not match.
* Use this to limit the depth of nested backtracking in an internal PCRE
  function, match(). If set too low, PCRE might fail to correctly match a
  pattern.
* Default: 1000

transforms.conf - Splunk Documentation
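
Put together with the stanza from earlier in the thread, that would look like this (10000 simply mirrors the value you already tried in limits.conf):

[tenable_remove_logs]
REGEX = (?m)(ABCSCAN)
DEST_KEY = queue
FORMAT = nullQueue
DEPTH_LIMIT = 10000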

dural_yyz
Builder

If you are not familiar with changing depth_limit, check out this material:

https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Limitsconf

depth_limit = <integer>
* Limits the amount of resources that are spent by PCRE
  when running patterns that will not match.
* Use this to limit the depth of nested backtracking in an internal PCRE
  function, match(). If set too low, PCRE might fail to correctly match
  a pattern.
* Default: 1000

Your match is 1,500+ characters into your event. I know you have sanitized it, so you need to check your real data to get the right count.

PickleRick
SplunkTrust

This shouldn't be the case. With such a simple pattern there is not much backtracking. It would matter if there were wildcards, alternations, and the like; with a pretty straightforward match, that's not it.
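
As a quick check that the pattern itself matches your raw events, a search-time test along these lines (the index name is a placeholder) should return the events that were indexed:

index=your_index sourcetype=tenable:sc:vuln | regex _raw="ABCSCAN"

If it does, the regex is fine, and the question goes back to where in your ingestion pipeline these transforms actually run.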

Dilsheer_P
Loves-to-Learn Lots

I made the changes and it is not working. The data is still being indexed.
