Hello,
I have the following query to search Proofpoint logs.
index=ppoint_prod host=*host1*
| eval time=strftime(_time, "%m-%d-%y %T")
| rex "env_from\s+value=(?<sender>\S+)"
| rex "env_rcpt\s+r=\d+\s+value=(?<receiver>\S+)"
| stats first(time) as Date first(ip) as ConnectingIP first(reverse) as ReverseLookup last(action) last(msgs) as MessagesSent count(receiver) as NumberOfMessageRecipients first(size) as MessageSize1 first(attachments) as NumberOfAttachments values(sender) as Sender values(receiver) as Recipients first(subject) as Subject by s
| where Sender!=""
It provides what I need on a per-message level. How would I modify this to get a list of ConnectingIP and ReverseLookup values per Sender? If possible, it would be nice to also get the number of messages per sender, but that is not absolutely necessary. I understand I will need to drop from the query everything that is message-specific, like Subject, NumberOfAttachments, etc.
I am looking to get something like this:
sender1@domain.com
ConnectingIP_1
ReverseLookup_1
ConnectingIP_2
ReverseLookup_2
sender2@domain.com
ConnectingIP_3
ReverseLookup_3
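One possible approach (an untested sketch, reusing the field names from the original search): first collapse to one row per message with the existing per-session stats, then aggregate again by Sender. The dc(s) count of distinct sessions stands in for the number of messages.

```spl
index=ppoint_prod host=*host1*
| rex "env_from\s+value=(?<sender>\S+)"
| stats first(ip) as ConnectingIP first(reverse) as ReverseLookup values(sender) as Sender by s
| where Sender!=""
| stats values(ConnectingIP) as ConnectingIPs values(ReverseLookup) as ReverseLookups dc(s) as NumberOfMessages by Sender
```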
Yes. For different input stanzas - sure. But you can't - for example - have multiple apps each defining their own _meta entries (like one for the environment the forwarder is in, another for the OS or the team responsible, or whatever) for the same input.
In terms of pathing to the config file, you can think of _cluster as an app. So you can do:
master-apps/_cluster/local
-or-
master-apps/<yourapp>/local
But it makes no sense to have both combined:
master-apps/_cluster/<yourapp>/local
ref: https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Updatepeerconfigurations
One other thing also came to mind: you are using the old name "master-apps" rather than the new name "manager-apps". This is fine as long as all your apps are placed either in master-apps or manager-apps, but your apps should not be in both folders.
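An illustrative layout of the two valid options (the app name is hypothetical):

```
$SPLUNK_HOME/etc/master-apps/
├── _cluster/
│   └── local/
│       └── inputs.conf      # settings common to all peers
└── my_hec_app/
    └── local/
        └── inputs.conf      # app-scoped settings
```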
You can have separate _meta entries for different input stanzas. If you have two heavy forwarders handling different inputs, then this should be doable. I've not tried it in a generalized input stanza, but if the number of input stanzas is low, it is feasible to add a _meta entry for each.
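A minimal inputs.conf sketch of per-stanza _meta entries (the monitor paths, sourcetypes, and field names are hypothetical):

```
# Each monitor stanza carries its own _meta; the fields become indexed fields.
[monitor:///var/log/app_a]
sourcetype = app_a:log
_meta = env::prod team::alpha

[monitor:///var/log/app_b]
sourcetype = app_b:log
_meta = env::dev team::beta
```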
What do you mean by "we go to the Data inputs => HTTP Event Collector at indexer Side"? Do you have the WebUI enabled on your clustered indexers? That's a big no-no.
Unfortunately, there is just one "instance" of the _meta entry in the whole config, so you can't "merge" separate _meta settings - one will overwrite the other. That's why TRANSFORMS is a better approach. I'm also not sure what _meta will do on the splunktcp input, especially when handling an input stream that already contains metadata fields.
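A sketch of the TRANSFORMS approach mentioned above (the stanza, sourcetype, and field names are hypothetical): transforms.conf writes an indexed field at parse time, and props.conf binds it to a sourcetype.

```
# transforms.conf
[add_hfnum]
REGEX = .
FORMAT = meta_hfnum::1
WRITE_META = true

# props.conf
[my:sourcetype]
TRANSFORMS-hfnum = add_hfnum
```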
Can you post how your _meta field was configured? It should be in inputs.conf and have the format:
_meta = fieldname::fieldvalue
So if you have two heavy forwarders, one can have an input with:
_meta = meta_hfnum::1
and the other:
_meta = meta_hfnum::2
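Since _meta produces indexed fields, such a field can then be searched with the indexed-field syntax, e.g. (index name hypothetical):

```spl
index=main meta_hfnum::1
```

Optionally, declaring it in fields.conf on the search head lets the usual meta_hfnum=1 syntax work as well:

```
[meta_hfnum]
INDEXED = true
```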
Hi Folks, I've been using mcollect to collect metrics from the events in my indexes. I thought that if I set up an alert with the mcollect part in the search, it would automatically collect the metrics every X minutes, but that doesn't seem to be working; the metrics are only collected when I run the search manually. Any suggestions on how I can make mcollect collect the metrics automatically? Thanks
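For reference, a scheduled search is driven by savedsearches.conf settings like the following (the stanza name, indexes, and schedule are hypothetical); for mcollect to run unattended, the search must actually be enabled for scheduling, not just saved:

```
[collect_my_metrics]
search = index=my_events sourcetype=my_sourcetype | mcollect index=my_metrics
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
```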
You shouldn't need to put inputs.conf into master-apps/_cluster/http_input/local; it should go into either master-apps/_cluster/local or master-apps/http_input/local. Try moving it into _cluster/local or http_input/local.
To use the API to create access tokens, you need to use the management port (8089), not the web interface port (8000). You also need to remove the localization (en-US) part of your path. It should be:
curl -k -u admin:Password -X POST http://127.0.0.1:8089/services/authorization/tokens?output_mode=json --data name=admin --data audience=Users --data-urlencode expires_on=+30d
I also suggest using https:// instead of http://. You don't want your token to be visible in plaintext over the network.
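Once created, the token (the "token" field in the JSON response) is used as a bearer token on later REST calls. A sketch (the token value is a placeholder, and the echo makes it a dry run that only prints the command; drop the echo to actually send the request):

```shell
# Placeholder token; substitute the value returned by the POST above.
TOKEN="<paste-token-here>"
# Build the follow-up request against the management port.
CMD="curl -k -H \"Authorization: Bearer $TOKEN\" https://127.0.0.1:8089/services/server/info?output_mode=json"
echo "$CMD"
```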
Assuming that field1 and field3 are always at the beginning and end of the line respectively, that their values do not contain spaces, and that the fields are separated by spaces, you could use this:
^(?<field1>\S+)\s*(?<field2>\S+)?\s(?<field3>\S+)$
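The pattern can be tried against both sample events with makeresults before pointing it at real data (untested sketch):

```spl
| makeresults count=2
| streamstats count as n
| eval _raw=if(n=1, "thisisfield1 thisisfield2 mynextfield3", "thisisfield1 mynextfield3")
| rex "^(?<field1>\S+)\s*(?<field2>\S+)?\s(?<field3>\S+)$"
| table field1 field2 field3
```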
Probably an easy one. I have two events as follows:
thisisfield1 thisisfield2 mynextfield3
thisisfield1 mynextfield3
Meaning in some events field2 exists and in some it doesn't. When it does, I want the value; when it doesn't, I want it to be blank. All records have mynextfield3 and I always want that as field3. I want to rex these lines and end up with:
field1        field2        field3
thisisfield1  thisisfield2  mynextfield3
thisisfield1                mynextfield3
Hello,
I want to create an HEC input on the indexers (indexer cluster).
Create inputs.conf under /opt/splunk/etc/master-apps/_cluster/http_input/local:
[http]
disabled=0
enableSSL=0
[http://hec-input]
disabled=0
enableSSL=0
#useACK=true
index=HEC
source=HEC_Source
sourcetype=_json
token=2f5c143f-b777-4777-b2cc-ea45a4288677
Push this configuration to the peer apps (indexers).
But when we go to Data inputs => HTTP Event Collector on the indexer side,
we still find it as below:
@Vamsikrishna This is a rather old thread and the thread author's last activity on the forum was about 3 years ago, so it's relatively unlikely you'll get an answer from them. To the main point: I'd guess that, for one reason or another, the forwarder fails to break the input stream into small enough chunks before sending it downstream.