All Posts

Yes, other apps can be installed with ES to help improve your security monitoring abilities.  A common one is Splunk Security Essentials, but there are many others.  Go to apps.splunk.com to see what is available and choose those that support the products you need to monitor. Use caution when installing apps on your ES SH because ES uses a lot of resources.  Apps that don't contribute directly to your ES use cases should be installed on a separate SH. ITSI is not a security product and should be installed on its own SH.
@splunklearner wrote: "I am unable to receive those syslog in forwarder or indexer."

Why not? What errors do you see?

Sending syslog directly to a Splunk process is not good practice. Syslog events should be sent to a dedicated syslog server (like rsyslog or syslog-ng) and saved to disk. Then have a Splunk Universal Forwarder monitor those disk files.
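A minimal rsyslog sketch of that setup (the port, directory layout, and file names are just examples, adjust to your environment):

# /etc/rsyslog.d/50-remote.conf -- listen on TCP 514 and write one file per sending host
module(load="imtcp")
input(type="imtcp" port="514" ruleset="remote")
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%/syslog.log")
ruleset(name="remote") {
  action(type="omfile" dynaFile="PerHostFile")
}

The UF then just monitors /var/log/remote/ with a standard [monitor://...] input.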
@mattymo Do the answers to your questions affect the logic of your previous suggestions? If possible, could you please clarify?

I would probably:
- back up the apps and kvstore if needed (a sketch of the kvstore step is below)
- build the new SH/SHC in the cloud
- restore configs
- cut over DNS or point users in a uniform fashion to the new SH during a maintenance window
- shut down the old SH.
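The kvstore backup/restore might look something like this (the archive name is just an example; archives land in $SPLUNK_HOME/var/lib/splunk/kvstorebackup by default, so copy the archive there on the new SH before restoring):

# on the old search head
splunk backup kvstore -archiveName pre_migration
# on the new search head, after copying the archive over
splunk restore kvstore -archiveName pre_migration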
As I said, if you can't configure your input so that it assigns _time automatically, you're limited to using INGEST_EVAL to find the timestamp within your event and then strptime it.
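A minimal sketch of that approach (the sourcetype name, the position of the timestamp in _raw, and the format string are all assumptions, adjust them to your data; INGEST_EVAL runs at index time, so this has to live on the first heavy forwarder or indexer that parses the data):

props.conf:
[my_db_sourcetype]
TRANSFORMS-set_time = set_time_from_raw

transforms.conf:
[set_time_from_raw]
INGEST_EVAL = _time=strptime(substr(_raw, 1, 19), "%Y-%m-%d %H:%M:%S")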
LTM, as far as I know, is not something you can "install on a syslog server". About LTM you have to talk with your F5 specialist.

Syslog ingestion can be a relatively complicated thing. While for lab usage or some very small deployment you could probably get away with receiving events directly on TCP or UDP inputs on your UF, it's not recommended for production use. You should use an external syslog receiver which either writes to files from which you pick up the events with monitor inputs (a sketch of such an input is below), or which sends the events to a HEC input on your HF or indexer.

Load-balancing syslog traffic is usually not a good idea. It's often better to just install a good syslog receiver as close to the source as possible.
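For the file-based option, a minimal monitor input on the UF could look like this (index, sourcetype, and path are placeholders; host_segment = 4 assumes a /var/log/remote/<hostname>/syslog.log layout so the per-host directory name becomes the host field):

inputs.conf:
[monitor:///var/log/remote/*/syslog.log]
index = netops
sourcetype = f5:bigip:syslog
host_segment = 4
disabled = false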
@PickleRick Thanks for your wonderful suggestion in the shared doc link. However, the timestamp specification setting is only available for the "Batch" input type, not for the "Rising Column" type. Is there any other suggestion or idea to apply this with a rising column input as well, to avoid duplicate ingestion of events?
Hi guys, syslog is sent to the forwarder IP on TCP port 9523. I am unable to receive that syslog data on the forwarder or indexer. How can I check whether syslog is being received on the forwarder? How do I get that syslog data into the indexer? The logs come from a network device.
Hello ES Splunkers, I want to know if any applications can be installed alongside Enterprise Security to enhance the security posture. Is the ITSI app added value for the security posture?
My architecture: F5 devices send logs to our syslog server, and we have a UF installed on the syslog server to forward the data to our Splunk deployment. But the client wants to install LTM on our syslog server because sometimes logs do not come through properly... We use UDP as of now, but TCP is recommended for them. I am not familiar with syslog configuration at all.
Your questions are very vague and it's very hard to tell what you have at this moment and what you're trying to achieve. Be a bit more descriptive about what your current architecture is and what your goal is. We can help with specific technical questions or explain something that you don't understand from the docs, but community volunteers are not a substitute for proper support or professional services.
Hi @PickleRick, can you tell me a bit more about LTM and how to configure it with syslog? We are receiving data from F5 devices only. And please help me with syslog configuration for Splunk, with a link to the latest docs.
Ha! So it's a modular input. With modular inputs, time processing works a bit differently. See https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/modinputsscript

You need to configure your database input properly: https://docs.splunk.com/Documentation/DBX/3.18.1/DeployDBX/Createandmanagedatabaseinputs or - if you can't find a suitable combination of parameters - you need to use INGEST_EVAL to modify the _time field after the initial parsing stages during ingestion.
@PickleRick I am putting the props settings under /app/local. Note: data is ingested into Splunk from the DB Connect app, so I have applied all the props settings under /db_connect_ap/local.
@yuanliu You should normally not need to escape quotes. It's not a rex command in SPL.

@uagraw01 How are you ingesting your data and where do you put those props? (On which server?)
I have this same problem. If a container has multiple artifacts, for example 10, the tagging approach usually limits duplicate actions to 1-3 instead of 10. I haven't been able to find low-level details about how the Python scripts are executed at an interpreter/ingestion level, and I don't think they exist publicly, which is unfortunate because the power of the platform lies in being able to use Python to efficiently process data. The VPE makes this clunky.

I spent 3-4 years on Palo Alto's XSOAR as the primary engineer, and for all its quirks, Palo Alto has produced far better documentation for their SOAR than Splunk has (Palo Alto overhauled their documentation when they acquired Demisto). I'm about a year into using Splunk SOAR, and for all the quirks I had to handle with Palo Alto's XSOAR, I wish I could go back to it. Maybe my opinion/preference will change, but unless Splunk produces better documentation and opens some lower-level documentation up to the public/community, I'm doubtful it will.

Palo Alto's XSOAR has a feature called Pre-Processing rules which allows you to filter, dedup, and transform data coming into the SOAR before playbook execution. I wish Splunk SOAR had something similar; that way ingestion/deduplication logic (if you can even call tagging "that") wouldn't be intermingled in the same area as the "OAR" logic of the playbook, and race conditions could hopefully be avoided.

The problem with "Multi-Value" lists is that they break pre-existing logic. Maybe I'm missing something, but that option should be configurable in the Saved Search/Alert + Action settings of the Splunk App for SOAR Export, so that it could be set on a per-alert basis.

Six years ago I chose Demisto over Phantom working for a Fortune 300; if I could have my way right now, I'd probably go with my first choice again. P.S. To be fair to Splunk SOAR, maybe there's some feature I'm overlooking.
No. CMC is pre-built and as far as I know there's no way to edit it at the user level. Also, what would you want to "monitor" when you can't dispatch REST searches to indexers? If you just want to dig through the logs, you don't need CMC for that.
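For example, a plain search over the internal logs (no CMC involved) could be something like:

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count by host component
| sort - count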
Interesting find. It's inconsistent with the docs, so it calls for a support case or at least docs feedback.
Hi @splunklearner, apply the checks that @dural_yyz hinted at. In a few words: check the UF configuration less and the syslog configuration more. Ciao. Giuseppe
Hi @esimon, use a regex (rex command) to extract the first part of the token. In other words, if the token is "A-12345" and you want to use index="A-12345" and column="A" for the WHERE condition, you could try:

index="$token$"
| eval tok="$token$"
| rex field=tok "^(?<my_field>[^-]*)"
| where column=my_field
| ...

(rex needs a field to work on, so the token value is first copied into a field with eval, and the where clause compares against the field my_field rather than the literal string "my_field".) But the eval-only approach should also run. Ciao. Giuseppe
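That eval-only variant could look like this (assuming the token value always has the form <prefix>-<something>):

index="$token$"
| eval my_field=mvindex(split("$token$", "-"), 0)
| where column=my_field
| ...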
LTM is an F5 product, not part of the Splunk environment. Also, load-balancing syslog traffic can be a relatively complicated issue despite its perceived simplicity.