All Posts


MuS, thanks for the response. I am going to take this and work with what I have. As I put this into my search, I found out that my test data is different from what my _raw data actually is. The username field from the printserver index is "username", but the username field from my printlogs is "User_Name" and has a domain name in front of it.

index=printserver
_time                  prnt_name   username    location
2024-11-04 11:05:32    Printer1    jon.doe     Office
2024-11-04 12:20:56    Printer2    tim.allen   FrontDesk

I have an index getting data from our DLP software that contains the following data:

index=printlogs
_time                  User_Name     directory              file
2024-11-04 11:05:33    cpn/jon.doe   c:/desktop/prints/     document1.doc
2024-11-04 12:20:58    tim.allen     c:/documents/files/    document2.xlsx

I am going to rex the User_Name field from my print logs to match it with my printserver logs. This is what I am going to work with, and I'll see if I get the results I need. Thank you for your insight.

index=printserver OR index=printlogs
| rex field="User_Name" "(?<domain>\S+)\\\\(?<username>\S+)"
| bin _time span=3s
| stats values(*) AS * by _time username
| table _time prnt_name username location directory file
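One quick way to sanity-check that rex before running it across both indexes is a throwaway makeresults search; the sample value below is made up. Note that the pattern expects a backslash separator (the \\\\ in the regex), while the printlogs sample above shows cpn/jon.doe with a forward slash, so adjust whichever one doesn't match the real data:

| makeresults
| eval User_Name="cpn\\jon.doe"
| rex field=User_Name "(?<domain>\S+)\\\\(?<username>\S+)"
| table User_Name domain username

If domain and username come back empty, the separator in the pattern doesn't match the separator in the data.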
Have you tried to consolidate them via the stats command and then configure your alert to trigger for each result and tokenize the email parameter? Try this (adjust to your reality):

<your search>
| stats values(event_field) as events by user, email

Then in your alert configuration, set the trigger conditions:

Number of results > 0
Trigger: For each result

And add the email action with To set to the token $result.email$. That way each address receives its own group of events. Give it a try and let me know.
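For reference, a minimal sketch of what that looks like in savedsearches.conf; the stanza name, schedule, and field names are illustrative, the rest are standard alerting keys:

[alert_events_per_email]
search = <your search> | stats values(event_field) as events by user, email
enableSched = 1
cron_schedule = */15 * * * *
counttype = number of events
relation = greater than
quantity = 0
alert.digest_mode = 0
action.email = 1
action.email.to = $result.email$

alert.digest_mode = 0 is what makes the alert fire once per result, so each row's $result.email$ token resolves independently.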
Hi @hazem,

Is this [DDMMYYYY] just a placeholder for an actual date in this example, or is it the literal string being monitored in the monitor stanza and also the literal text in the filename? I ask because if what you want to do is monitor C:\Program Files (x86)\dir1\log\name_CRT_<any date>.log, then you can use * at that part, like:

C:\Program Files (x86)\dir1\log\name_CRT_*.log

This way the monitor stanza will know what to do.

Anyway, always make sure that for the forwarder to properly monitor something, the file must have the right read permissions. Some applications under Program Files may be locked down to administrators, and that may cause the SplunkForwarder service not to have permission to read the particular log. A good indication is to check the _internal index for logs related to that path and see if they mention Access Denied somewhere. The search below may give you some hits; restart the Splunk forwarder and watch that log over the last 5 minutes or so, since the forwarder evaluates its monitors at startup, so you'll catch it more easily.

index=_internal host=<my_forwarder_host> "C:\Program Files (x86)\dir1\log\"
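Putting the wildcard advice together, a minimal sketch of the corrected inputs.conf stanza, reusing the sourcetype and index from the original post:

[monitor://C:\Program Files (x86)\dir1\log\name_CRT_*.log]
sourcetype = mylog:auditlog
index = test
disabled = 0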
Hey, I am looking for the third-party notices for the Splunk Add-on for Palo Alto Networks 1.0.0. Unfortunately, I cannot find them in the documentation, since the corresponding section in "Release notes - Splunk Add-on for Palo Alto Networks" is empty. Can anyone help me out and provide the third-party notice information? Best regards! Matthias
Hello, I am trying to monitor the path below from a host that has a UF installed:

C:\Program Files (x86)\dir1\log\name_CRT_[DDMMYYYY].log

I have inserted the stanza below, but I have not received any logs:

[monitor://C:\Program Files (x86)\dir1\log\name_CRT_[DDMMYYYY].log]
sourcetype = mylog:auditlog
disabled = 0
index = test

Any help please?
We are using Splunk forwarder v9.0.3. We would like the Splunk forwarder to reject the TLS server certificate if the path length basic constraint check fails. We generated the TLS server certificate with pathlen set to 0 on the root CA, and the chain is root CA -> intermediate CA -> server certificate. As the root CA's pathlen is 0, no intermediate CA should be present, but the forwarder accepts the chain root CA -> intermediate CA -> server certificate. Is this a known limitation, or does it require a configuration change to enable basic constraint validation on path length? Please advise. Below are our outputs.conf contents.

[tcpout-server://host:port]
clientCert = /<..>/clientCert.pem
sslPassword = <..>
sslRootCAPath = /<..>/ca.pem
sslVerifyServerCert = true
sslVerifyServerName = true
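As a cross-check outside Splunk, OpenSSL's own verifier does enforce the basic constraints path length, so it can confirm that the chain itself violates pathlen:0 (file names here are illustrative):

openssl verify -CAfile rootCA.pem -untrusted intermediateCA.pem serverCert.pem

With pathlen:0 on the root, this should fail with a path length constraint error, which at least isolates the question to the forwarder's validation behavior.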
I feel that if we can first group the events on email and then use the email as a token in the email recipient, we can do it. But I'm not getting how we can do that.
I don't understand. You want to send different sets of results to different people as a single alert action? No can do. You could try using https://splunkbase.splunk.com/app/1794
Hi @smanojkumar, were you able to solve it with that query? If it was helpful, maybe you can mark it as solved, and I would appreciate it if you gave me karma. If you mark it as solved, it will help other users who have the same problem.
PLEASE stop regurgitating LLM responses without checking. It's not helpful.
The most typical reason for truncation when using syslog is that you're sending events over UDP and hitting the datagram size limit. Do you receive syslog on your "syslog server" and write events to file(s) from which you pick them up with a UF? If so, check the contents of the intermediate file(s). If the events are truncated there, it's a problem on the syslog side. If the events are OK there but are truncated after ingesting with the UF, it's on the UF's (or indexer's) side. If it's the syslog side, you can try switching to TCP.
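If you want to check quickly whether events are being cut at a fixed size, comparing raw event lengths can help. A minimal sketch, with index and sourcetype as placeholders you'd need to fill in:

index=<your_index> sourcetype=<your_sourcetype>
| eval raw_len=len(_raw)
| stats count min(raw_len) max(raw_len) avg(raw_len)

If max(raw_len) sits exactly at a round number (for example 1024 or 10000), you're most likely hitting a datagram or TRUNCATE limit.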
Can't help you here beyond advising again to check the docs; I haven't dealt with ePO for several years now. If by "logs aren't received in full" you mean that events are truncated, you're probably trying to send them over UDP and are therefore limited by the maximum UDP datagram length. Switch to TCP (again, as far as I remember, ePO requires TLS encryption over TCP, so it might be a little trickier to configure) and you're all set.
Hi @splunklearner, I suppose that you're using the standard add-on from Splunkbase (https://splunkbase.splunk.com/app/2846); if not, use it. Check whether the logs are truncated or divided into two events. If truncated, check the TRUNCATE option for that sourcetype. If divided, check whether there's a date somewhere inside the logs. Ciao. Giuseppe
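If truncation turns out to be the cause, the TRUNCATE setting lives in props.conf on the indexer (or heavy forwarder) that parses the data. A minimal sketch, with a hypothetical sourcetype name:

[f5:asm:syslog]
TRUNCATE = 100000

The default is 10000 bytes; TRUNCATE = 0 disables the limit entirely, but a generous explicit cap is usually safer.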
Hi, the F5 team is sending logs to our Splunk syslog server as comma-separated values. Post onboarding, we see that some field values (string values) are truncated.

Example:
From F5: Violation_details=xxxxxxxxxxxxx (say 50 words)
After onboarding to Splunk: Violation_details=xxxxx (truncated)

What might be the issue here? Syslog server -- UF -- Indexer (our flow)
Thank you for the information! Currently, I'm receiving logs from the ePO server via syslog, but the logs aren't being received in full. To improve this, I'm considering using the ePO API for more reliable log collection. Could you guide me on how to configure log ingestion from the ePO server using its API instead of syslog? I would appreciate details on:

Steps for setting up ePO API integration with Splunk
Any authentication requirements or best practices for secure data transfer
Example scripts or configurations, if available

Thank you in advance for any guidance!
Hi, I have a huge set of data with different emails in it, and I want to set up email alerts for a few parameters. The issue is that I'm unable to group the events on email and send an email alert with a CSV attachment of the results.

Example: abc@email has around 80 events in the table; I want to send only one alert to abc with all 80 events in it as a CSV attachment. There are around 85+ emails in my data, and they have to be grouped using only one SPL search that can be used in an alert.

Note: don't suggest $result.field$ or stats to group; it's not useful for me. Thank you
We are using Splunk forwarder v9.0.3. One of the X509 validations we would like to have against the TLS server certificate coming from the Splunk indexer is ExtendedKeyUsage (EKU) validation for Server Authentication. We generated the TLS server certificate without the ExtendedKeyUsage to test this use case; however, the Splunk forwarder still accepts the TLS server certificate. Ideally, it should accept it only when ExtendedKeyUsage is set to Server Authentication. Is this a known limitation, or does it require a configuration change to perform this EKU validation? Please advise. Below are our outputs.conf contents.

[tcpout-server://host:port]
clientCert = /<..>/clientCert.pem
sslPassword = <..>
sslRootCAPath = /<..>/ca.pem
sslVerifyServerCert = true
sslVerifyServerName = true
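If the forwarder ships its own _internal data, its splunkd log usually records how the server certificate was evaluated during the handshake, which can show whether any check fired at all. A minimal sketch (the host name is a placeholder, and exact message text varies across versions):

index=_internal host=<forwarder_host> sourcetype=splunkd (log_level=WARN OR log_level=ERROR) (SSL OR TLS OR certificate)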
If you use "Global Account" from the Component Library, then you should be able to access the account data like this:

tenant_data = helper.get_arg('tenant')

where 'tenant' is the Global Account component's name.

As a result, the variable tenant_data will be initialized as a dictionary with the keys name, username, and password for the specific account, so you can use the username and password keys, e.g. for authentication.
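For context, a minimal sketch of how that looks inside the modular input module that Add-on Builder generates (the scaffold supplies helper and ew, so no imports are needed here); the component name 'tenant' matches the snippet above, everything else is illustrative:

def collect_events(helper, ew):
    # 'tenant' is the name of the Global Account component on the input form
    tenant_data = helper.get_arg('tenant')
    # tenant_data is a dict with keys: name, username, password
    username = tenant_data['username']
    password = tenant_data['password']
    # use the pair e.g. for HTTP basic auth against the remote API
    helper.log_info("Collecting events as user {}".format(username))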
Settings -> Distributed Environment -> Distributed Search -> Search Peers -> Add New. As I said before, for the SHC you only need to add the CM; the indexers should populate automatically. The rest of the components you need to add one by one. Then, in the distributed monitoring console, you'll have to set up roles for each of those components.
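If you prefer the CLI over the UI, the equivalent is the add search-server command, run on the instance hosting the monitoring console. A minimal sketch with placeholder host and credentials:

splunk add search-server https://<peer_host>:8089 -auth admin:<local_password> -remoteUsername admin -remotePassword <remote_password>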
Nope. Can you provide me with the guidelines to add it?