Thanks, this is a big help. I will work this into my query. What has been happening is that duplicated fields just come in through the _raw data flow in Splunk. So many good answers. I appreciate every one.
Hey, Ed.
It sounds like you're monitoring a local directory on a syslog server. Try creating a local/inputs.conf file with the monitor stanza, and only assign sourcetype = syslog:
[monitor:///opt/logs/all_logs]
disabled = false
sourcetype = syslog
Also, make sure the hostname on the ASA is configured correctly, and that this command is set:
asa(config)#logging device-id hostname
The Add-on should pull out the hostname accurately. This worked for me. I didn't edit transforms or props. Let me know if it works!
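To sanity-check the host extraction, a quick search like this should show one row per device (using the sourcetype assigned above):
sourcetype=syslog | stats count by host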
Ed (as well)
That is odd since it works fine for me. Is this a single server install? https://wiki.splunk.com/Community:TroubleshootingIndexedDataVolume
Hi Hexx,
I'm getting this on the automatic lookups page when I add a new lookup, running version 6.2.1.
Can this work in this version as well?
Have you tried using the Splunk App for Stream? You could configure it on the server that mounts the directories, and specifically monitor any port and protocol required. Events will be collected from the wire and logged as JSON events into Splunk.
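If you go that route, a quick way to confirm events are flowing might look like this (assuming a TCP capture and Stream's default stream:tcp sourcetype):
sourcetype=stream:tcp | head 10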
I have the same problem. I want to store frozen buckets in S3 in CSV format, and I'm using shuttl to do this. The problem is that the CSV files stored on S3 are not comma-separated text but something else that isn't human-readable. The CSV files for every bucket have the same size (21 bytes) and the same content. The CSV files are nested in a path of this type:
s3://S3_BUCKETNAME//archivePath/archive_data/clusterName/serverName/INDEX/BUCKET_NAME/BUCKET_NAME.csv
In addition to these files, in the root of the bucket I found files like block_9139990103400054340, but they are not human-readable either.
However, if I try to restore the S3 data with the shuttl interface, it works, and I find the correct data in Splunk.
So, archiving and restoring Splunk frozen data with shuttl works, but the CSV data stored on S3 is unreadable with other tools. Is this normal? Am I doing something wrong?
Disabling the command isn't going to make it work.
Run a search like this:
source="*www-access_log*"
and see if there is a field containing the client's IP address. Then use that field name in the geoip call.
If there's no field yet, post some sample events and we'll help you extract the field.
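For example, if the extracted field turns out to be called clientip (a placeholder name here), the call would look something like:
source="*www-access_log*" | geoip clientip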
The minimum SideviewUtils version required by SoS is 1.1.7, not 1.7. Getting 3.x with the free internal-use license doesn't hurt, though.
Yep, I uploaded it yesterday and am awaiting approval. There will be two components: the add-on, which communicates with the EMC CEE API, and the app, which contains all the lookup tables, field extractions, etc.
You can get creative and run a second forwarder on different ports on the same system in a scenario like the one you described. That helps eliminate the bottleneck: you can dedicate that forwarder to just the one heavy sourcetype and tune its limits.conf specifically for it, as in the sketch below.
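A minimal sketch, assuming a second universal forwarder installed under /opt/splunkforwarder2 (the path, port, monitored directory, and sourcetype are all placeholders):
# give the second instance its own management port so it doesn't collide with the first
/opt/splunkforwarder2/bin/splunk set splunkd-port 8090
# /opt/splunkforwarder2/etc/system/local/limits.conf
[thruput]
# 0 removes the default throughput cap for this dedicated instance
maxKBps = 0
# /opt/splunkforwarder2/etc/system/local/inputs.conf
[monitor:///var/log/heavy_app]
sourcetype = heavy_app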
Yes, I am getting several logs. We have enabled logs from our firewall and our F5, and I want to separate them. If I run sourcetype=syslog or source="udp:514" I get logs from both devices. How can I separate the logs?
Hi,
Same issue here, but changing props.conf didn't help.
I tried adding MAX_DAYS_AGO under [default], and even in each stanza.
Additional info: I'm working with a heavy forwarder.
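Roughly what I tried looks like this (the non-default stanza name is just a placeholder):
# props.conf on the heavy forwarder
[default]
MAX_DAYS_AGO = 3650
[my_sourcetype]
MAX_DAYS_AGO = 3650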
Thanks
So I figured out how to do this, based on a response to another, similar question.
my_search | stats count by signature | eval signature_slice = "Events: " + count + ", " + signature | fields signature_slice, count
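One caveat: in some Splunk versions, mixing a number and a string with + in eval returns null, so the explicit concatenation operator (the period) is the safer choice:
my_search | stats count by signature | eval signature_slice = "Events: " . count . ", " . signature | fields signature_slice, count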
I have the same issue. pfSense and Snort logs all land in the same pfsense index. I'm assuming I have to install the forwarder on my pfSense box to forward those logs, rather than dumping the Snort logs into syslog.