FIELDALIAS behavior change in 7.2:
FIELDALIAS now overwrites the destination field's value regardless of whether the source field is NULL. In earlier versions, FIELDALIAS did NOT overwrite the destination when the source was NULL.
The workaround is to use coalesce() in an EVAL instead:
FIELDALIAS-s_computername = s_computername as host
should be replaced with something like
EVAL-host = coalesce(s_computername, host)
This is a known issue in the add-on, which stems from the Checkpoint OPSEC SDK only working with 32-bit OS flavors: http://docs.splunk.com/Documentation/AddOns/released/OPSEC-LEA/Releasenotes
The OPSEC SDK is no longer maintained, and Checkpoint recommends Log Exporter instead (which is based on syslog integration and thus avoids OPSEC altogether): https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk122323
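If you go the Log Exporter route, ingestion on the Splunk side is plain syslog. A minimal inputs.conf sketch; the port, index, and sourcetype here are assumptions, so use whatever your syslog tier and the Check Point add-on documentation call for:
[udp://514]
sourcetype = cp_log
index = checkpoint
disabled = 0
In production you would more likely point Log Exporter at a syslog server and monitor the resulting files, but the flow is the same.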
This is not a best practice per se, more a list of backward-compatibility risk-assessment considerations:
Does the add-on have any custom modular inputs? (Check the bin folder in the app directory; there is a sketch of what to look for after this list.)
Does the add-on have custom UI?
If the answer to either is "yes", the risk of installing on a newer Splunk version is higher and more rigorous testing is warranted. Many add-ons, however, such as Juniper, contain only Splunk configurations (.conf files), and those stay quite stable from version to version of Splunk Enterprise. (This may change in the future.)
If the above is too generic: it is always good practice to install the add-on in a staging environment before attempting a production upgrade, and many add-ons will just work.
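To illustrate the first check: an add-on that ships a custom modular input typically has a script in its bin folder and a matching stanza in inputs.conf, along the lines of this made-up example (the scheme name and settings are hypothetical); a configuration-only add-on has neither.
[my_custom_input://example]
# Hypothetical modular input stanza; the scheme name corresponds to a script in the add-on's bin folder
interval = 300
disabled = 1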
As for the specific add-ons in this question:
- Juniper is very close to a certified release
- Infoblox is on the list of items to be worked on in the next 2 months
- The recommended and supported solution for Okta ingestion is now developed by Okta and available here: https://splunkbase.splunk.com/app/3682/
The integration works as follows: when incident data hits SNOW, it is first written into an interstitial table, "Splunk Incident".
Therefore, to make this work, you will need to adjust that table definition on the SNOW side. (The table is part of the "Splunk Integration" SNOW app.)
Then you will need to change a few files, depending on the type of action you want to use (the alert action has custom UI, for example).
With the above said, let me ask you this:
- can you include this data in the description field? (a rough sketch of one approach follows below)
- can you set these fields using a custom workflow on the SNOW side?
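On the first question: one low-effort pattern is to pack the extra values into a single description string at search time and let the incident-creating action (or a business rule on the SNOW side) carry that one field through. A rough SPL sketch with made-up index, sourcetype, and field names:
index=main sourcetype=app:errors
| stats latest(severity) AS severity, latest(message) AS message BY host
| eval description = "host=" . host . ", severity=" . severity . ", detail=" . message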
I believe it should; the key is to include "kms:Decrypt" in the permission policy:
http://docs.splunk.com/Documentation/AddOns/released/AWS/ConfigureAWSpermissions
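For reference, a minimal policy sketch with the decrypt permission added next to the usual S3 read actions (the exact action list depends on which inputs you use, see the link above, and you would scope Resource down in a real deployment):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "*"
    }
  ]
}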
Your props look all mangled. Here is my copy:
[snow:em_event]
EXTRACT-snow_em_event_alert = alert="(?P<snow_em_event_alert>[^\s"]*)"
# For display_value = all, uncomment the following.
#FIELDALIAS-severity_name = dv_severity AS severity_name
# For display_value = all, comment the following.
LOOKUP-severity_name = severity_lookup severity AS severity OUTPUTNEW severity_name AS severity_name
I would suggest reinstalling the add-on.
ALB is not yet supported, but it is coming in a future version.
The Incremental input currently supports only 4 different types of data, so setting the sourcetype to something unsupported will not help unless you create the props/transforms yourself (a rough starting point follows below).
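If you do go down that road, the starting point is a props.conf stanza for a custom sourcetype that at least gets line breaking and timestamps right; field extractions would still be on you. A rough sketch, assuming ALB access logs with their ISO 8601 timestamp as the second space-delimited field (the sourcetype name is made up):
[aws:alb:accesslogs]
SHOULD_LINEMERGE = false
# Timestamp is the second field, e.g. 2018-07-02T22:23:00.186641Z
TIME_PREFIX = ^\S+\s+
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
MAX_TIMESTAMP_LOOKAHEAD = 30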
There are a couple of things to consider:
- having a rising column will actually add an "ORDER BY" clause to your query, so I am not sure why you were experiencing this
- for catching up on historical data, you can do a batch input to ingest everything and then create a new rising-column input to stay up to date
- timestamps are not recommended for rising columns (see the sketch below for an integer-based alternative): https://answers.splunk.com/answers/400221/why-is-using-a-timestamp-column-for-the-rising-col.html
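For illustration, a rough sketch of a rising-column style query, assuming DB Connect v3 conventions where ? is the checkpoint placeholder and an auto-incrementing integer id stands in for the timestamp (table and column names are made up):
SELECT id, event_time, message
FROM app_events
WHERE id > ?
ORDER BY id ASC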
That's great, but there is still a bug in the add-on, or rather a behavior we want to support, so it would be good to get that sample anyway, even if a customization serves as a workaround.
Also, hopefully you added this in local/props.conf, because if it is in default/props.conf, it WILL get overwritten on upgrade.
Timestamp extraction is not handled through the input, but rather by native Splunk behavior or a props/transforms override. For vpcflow logs it is the former.
The props for this sourcetype extract fields in roughly the following format:
[aws:cloudwatchlogs:vpcflow]
EXTRACT-all=^\s*(?P<version>[^\s]+)\s+(?P<account_id>[^\s]+)\s+(?P<interface_id>[^\s]+)\s+(?P<src_ip>[^\s]+)\s+(?P<dest_ip>[^\s]+)\s+(?P<src_port>[^\s]+)\s+(?P<dest_port>[^\s]+)\s+(?P<protocol_code>[^\s]+)\s+(?P<packets>[^\s]+)\s+(?P<bytes>[^\s]+)\s+(?P<start_time>[^\s]+)\s+(?P<end_time>[^\s]+)\s+(?P<vpcflow_action>[^\s]+)\s+(?P<log_status>[^\s]+)
This is the format it expects:
2 000000000000 eni-00000000 #ipv4-1# #ipv4-2# #port-1# #port-2# #protocol# #packets# #bytes# #timestamp# #timestamp# #action# OK
It seems Splunk is getting confused about the timestamp in your vpcflow events: instead of grabbing the start_time, it finds the account ID, which also looks like a timestamp.
This does seem like a bug; could you share a sample event for me to try?
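In the meantime, a local props override can force Splunk to read the timestamp from the right field. A sketch, assuming start_time is the 11th space-delimited field and is in epoch seconds; put it in local/props.conf and verify against your events before relying on it:
[aws:cloudwatchlogs:vpcflow]
# Skip the first 10 space-delimited fields so timestamp recognition starts at start_time
TIME_PREFIX = ^\s*(?:[^\s]+\s+){10}
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10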
"The "deployment-apps" directory is a link to the "apps" directory." ---> this is why: apps are updated with local metadata and other local settings. deployment-apps are for distributing. These should not be shared.
You are not really giving enough information to troubleshoot. There is no maxevent setting. Look for errors in index=_internal, make sure the time range is the same, and make sure you are actually comparing apples to apples.
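As a starting point, something like this will surface errors and warnings from splunkd (permissions on index=_internal permitting):
index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count BY log_level, component
| sort - count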
The config looks right, so this is probably an issue with the OPSEC app configuration.
Check here for some ideas: http://docs.splunk.com/Documentation/AddOns/released/OPSEC-LEA/Troubleshoot (specifically the Checkpoint URL: http://dl3.checkpoint.com/paid/20/How-To-Troubleshoot-SIC-related-Issues.pdf?HashKey=1463490738_979d1a6f694300a70576eecfe9d55b85&xtn=.pdf)
Hmm... it should work if it is proper JSON throughout; that is the first question to answer. If not, then yeah, you are in a pickle.
Either way, it makes sense to start from Kinesis, because at least it handles the JSON wrapper for you.
Send me a sample and I can try it. (I am assuming the sample above is not how your data looked coming in; I am specifically interested in the backslashes.)
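A quick way to check the "proper JSON" question on data you already have indexed: run spath over a few raw events and see whether fields come out; escaped backslashes usually mean the JSON was string-encoded a second time somewhere upstream. Index and sourcetype below are placeholders:
index=main sourcetype=your:kinesis:data
| head 10
| spath
If spath extracts nothing useful, the payload is not clean JSON at that point in the pipeline.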