Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

EDIPI will NOT work per account formatting in your last reply. You will definitely need PIV.

Have you tried to sign into Splunk via token using a non-admin account? The web.conf help page gives the different values you can use for certBasedUserAuthMethod. PIV would be correct for you, but certBasedUserAuthPivOidList may require a different value. I would look at your CAC values and find the field/attribute that holds the value you need Splunk to read. Per the web.conf help page, https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/Webconf :

    PIV (Personal Identity Verification): Use PIV, a 16-digit numeric identifier typically formatted as xxxxxxxxxxxxxxxx@mil. It is extracted from an "Other Name" field in the Subject Alternate Name which corresponds to one of the object identifiers (OIDs) that you configure in 'certBasedUserAuthPivOidList'.

Seems like the incorrect field is being read. Look through your logs to see if they show the value that is being read in, and try to match that value up on your CAC.

Otherwise, here is the full web.conf configuration for CAC authentication that I've had success with:

[settings]
requireClientCert = true
sslRootCAPath = $SPLUNK_HOME\etc\auth\DOD.web.certificates\cert_chain_created.pem
enableCertBasedUserAuth = true
SSOMode = permissive
trustedIP = 127.0.0.1
certBasedUserAuthMethod = PIV
certBasedUserAuthPivOidList = Microsoft Universal Principal Name
allowSsoWithoutChangingServerConf = 1
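If it helps pin down the right OID: you can export the authentication certificate from the CAC (via your browser or smart-card middleware) and inspect its Subject Alternative Name with openssl. The filename here is just a placeholder:

openssl x509 -in exported_cac_cert.pem -noout -text

Look for the "X509v3 Subject Alternative Name" section and the othername entries under it; the OID shown there is what belongs in certBasedUserAuthPivOidList. Depending on your openssl version, othername values may print as "othername:<unsupported>"; adding -certopt ext_parse forces an ASN.1 dump of the extension so you can still read the value.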
No dice. Previous errors with PIV/OID were:

ERROR UiAuth [2487972 TcpChannelThread] - SAN OtherName not found for configured OIDs in client certificate
ERROR UiAuth [2487972 TcpChannelThread] - CertBasedUserAuth: error fetching username from client certificate

New error with EDIPI:

ERROR UiAuth [2488903 TcpChannelThread] - user=<DoDID#> action=login status=failure reason=sso-failed useragent=<browser stuff>
I'll give it a go. PIV just seems like the way to go because my UPN is <myDoDID#>.ADMN@smil.mil. From everything I read, it made sense to use PIV plus OIDs (I can see multiple OIDs in my cert).
Have you tried replacing PIV with EDIPI?

certBasedUserAuthMethod = EDIPI
I've been asked to assist another department with getting their Splunk configuration working with Windows UFs. They have a single Linux-based 9.4.1 indexer that is successfully fed by a large number of Linux UFs. For the most part I haven't found anything really odd about it. They are using self-signed certs that have several years of validity left on them. FTR, I am not a Windows admin so I am kind of grasping at straws here.

Both their 'nix and Windows UFs use Splunk's Deployment Server for configuration. All UFs are using the same fwd_to_loghost and ssl_bundle apps; the only difference is the windowsconf_global or linux_global app, as appropriate (I have verified the correct app is installed). They made an attempt a year or so ago to get this working, with no success. I believe I've removed all trace of it, and I have removed and reinstalled the UF (using 9.4.1 this time) on the Windows host from scratch.

The Windows box connects to the Deployment Server and downloads the apps (fwd_to_loghost, ssl_bundle, and windowsconf_global) correctly, but when it tries to connect to the indexer to send logs it fails. The indexer says:

ERROR TcpInputProc [2957596 FwdDataReceiverThread-0] - Error encountered for connection from src=[redacted, correct IP address]:49902. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

The Windows box has some interesting things to say in C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log:

04-24-2025 14:03:59.924 -0700 INFO TcpOutputProc [2948 parsing] - Initializing connection for non-ssl forwarding to loghost.biostat.washington.edu:9997
...
04-24-2025 14:03:59.940 -0700 INFO CertificateData [2948 parsing] - channel=Forwarder, subject="emailAddress=[redacted],CN=loghost-uf.biostat.washington.edu,OU=Biostatistics,O=University of Washington,L=Seattle,ST=Washington,C=US", subjectAltName="DNS:keller-uf, DNS:keller-uf.biostat.washington.edu, DNS:loghost-uf, DNS:loghost-uf.biostat.washington.edu", serial=10, notValidBefore=1623099653, notValidAfter=1938459653, issuer="/C=US/ST=Washington/L=Seattle/O=UW/OU=Biostatistics/CN=zwickel.biostat.washington.edu/emailAddress=bite@uw.edu", sha256-fingerprint=10:31:07:BF:21:F2:49:41:34:E4:53:7F:89:C0:CB:81:99:6E:16:00:29:3E:C4:BC:C3:88:A1:CC:92:D0:AD:32
...
04-24-2025 14:04:00.362 -0700 WARN X509Verify [5944 HTTPDispatch] - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high-risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see: <http://docs.splunk.com/Documentation/Splunk/latest/Security/Howtoself-signcertificates>
04-24-2025 14:04:00.381 -0700 INFO CertificateData [5944 HTTPDispatch] - channel=HTTPServer, subject="O=SplunkUser,CN=SplunkServerDefaultCert", subjectAltName="", serial=9814D004673F8828, notValidBefore=1745011134, notValidAfter=1839619134, issuer="/C=US/ST=CA/L=San Francisco/O=Splunk/CN=SplunkCommonCA/emailAddress=support@splunk.com", sha256-fingerprint=DC:75:CA:ED:54:2A:28:12:D4:A1:B9:DC:37:29:75:F4:9B:56:1F:A2:C7:33:BB:EB:EF:02:37:AC:6E:81:E4:CA

I am not seeing anything in the log before the non-ssl line that appears to be an error, though it is a noisy log so it is quite possible I missed something. I have my working Splunk configuration with functional Windows and Linux UFs that I am trying to base this work on. It does not have the non-ssl or SplunkServerDefaultCert log entries.
I presume both are Bad Signs<tm>. Both my working system and this one have sslRootCAPath set in deployment-apps/fwd_to_loghost/default/outputs.conf:

[tcpout]
defaultGroup = splunkssl

[tcpout:splunkssl]
compressed = true
server = loghost.biostat.washington.edu:9997
clientCert = $SPLUNK_HOME/etc/apps/ssl_bundle/default/UF/loghost-uf-bundle.crt
sslPassword = [redacted]
sslRootCAPath = $SPLUNK_HOME/etc/apps/ssl_bundle/default/biostat-ca.crt
sslVerifyServerCert = true

Neither of them [had] sslRootCAPath set anywhere else in deployment-apps. I've tried adding a deployment-apps/windowsconf_global/default/server.conf, though ConfigureSplunkforwardingtousesignedcertificates seems to say this is only needed for non-Windows hosts:

[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/apps/ssl_bundle/default/biostat-ca.crt

but the "unknown protocol" errors and the non-ssl and SplunkServerDefaultCert log entries persist.

As I said, I'm not a Windows admin, but given the Windows hosts in the working environment are fine with paths like "$SPLUNK_HOME/etc/apps/ssl_bundle/default/..." in outputs.conf, and there is a reference to a clearly self-signed cert in the log, I have to presume these path entries are valid and working, so it should be finding both the cert and the bundle.

I've looked at the output of btool server and btool outputs, comparing it with the working instance (the invocations I'm using are below), and I don't see any obvious or glaring problems. The new server.conf entry shows up in the output of btool server list, so it is being seen but is not having any impact on the problem. I presume the "unknown protocol" error is because the Windows UF is trying to use a non-SSL connection, per the UF's log file entry.

I've read (and re-read, and re-re-read) https://docs.splunk.com/Documentation/Splunk/9.4.1/Security/ConfigureSplunkforwardingtousesignedcertificates and several forum posts that seem to be about this kind of problem, but so far nothing has addressed it. I have to try not to break the Linux UFs that are working, so I have to be careful what files I touch in deployment-apps; I'm trying to limit myself to only modifying things in windowsconf_global when possible.

Where should I look to try to resolve this problem? Given the Linux UFs are working fine, I presume the problem is somewhere in the config for the Windows UF. Thanks in advance for any assistance.
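For reference, the btool invocations I'm comparing are along these lines (paths assume a default Windows UF install); --debug prints which file each effective setting comes from:

cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk.exe btool outputs list --debug
splunk.exe btool server list sslConfig --debug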
crossposting: Splunk File Reader for "Malware_Detection_Logs" | Veeam Community Resource Hub
There is extra contextual data for the Malware Detection events that is needed in order to properly start an investigation into the alerts:

- Some of the additional contextual elements needed are in the “C:\ProgramData\Veeam\Backup\Malware_Detection_Logs\” directory.
- When setting up a SOC to respond to these alerts, you need this information embedded directly into the Detection Alert from your SIEM.
- This covers only the “Malware_Detection_Logs” directory; there are other contextual data sources needed that will be covered in another post.

First, ensure baseline logging and dashboards are set up. Veeam has actually done a very good job of making this easy:

- Set up syslog for VBR: Specifying Syslog Servers
- Install the Splunk app in your Splunk environment: Veeam App for Splunk

You then need to install a Splunk Universal Forwarder on the VBR server to collect the additional context data: Download Universal Forwarder for Remote Data Collection | Splunk

Create a custom Splunk file monitor input to collect data from the “Malware_Detection_Logs” directory (work with your Splunk admin if you need help; a sketch appears at the end of this post).

Now you are ready to correlate the Malware Detection events with their contextual data in a custom Detection Alert for your SOC:

- Only the “ObjectName” field is present in both the Malware Detection event and the log files.
- You need to use the timestamp and the ObjectName to correlate these data sources together.

THERE IS A LOT OF NUANCE IN THE <snip>ed SECTION BELOW (I can help if you need it, but it was too much to post and needed to be sanitized anyway).

You will now see contextual data embedded directly into the SOC alert. (We were doing some testing and set .txt and .pdf as bad file extensions just to generate data.)
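A minimal sketch of that monitor input, for inputs.conf on the VBR server's Universal Forwarder. The index and sourcetype names below are placeholders, not Veeam- or Splunk-defined values; set them per your environment:

# inputs.conf -- index/sourcetype are example names
[monitor://C:\ProgramData\Veeam\Backup\Malware_Detection_Logs]
index = veeam_malware_context
sourcetype = veeam:malware_detection_logs
disabled = 0

Directory monitors recurse by default, so new log files created under that path will be picked up automatically.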
Anyone, anyone? Bueller? I feel like I'm so close to making this work as well. SSL/TLS is configured, and Splunk Web prompts for PIV token + PIN, but it fails out to some "ERROR: Unauthorized" XML garbage in the browser. Tailing splunkd for CertBasedUserAuth reveals: error fetching username from client certificate.

Relevant settings:

certBasedUserAuthMethod = PIV
certBasedUserPivOidList = 1.3.6.1.4.1.311.20.2.3,Microsoft Universal Principal Name

Any ideas?
Quick question: let's say we use your query. When a day is muted, say 4/25, and an event happened that day, does the alert see that no results are returned and therefore not fire? I am trying to figure out why my alert fired on one of the dates that my lookup table has chosen to mute.

These are my alert settings: the cron schedule for the search is 0 6 * * 1-5, so it runs Monday through Friday, yet the alert fired on a day it was supposed to be muted. I was wondering, could the trigger condition be the root cause? Since there were no results returned, did the trigger conclude that "no results" also satisfies != 1? To illustrate what I mean, a rough sketch of the pattern is below (the lookup and field names are placeholders, not my real ones).
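index=main sourcetype=my_events
| eval search_date=strftime(now(), "%m/%d/%Y")
| lookup mute_dates.csv date AS search_date OUTPUT date AS muted
| where isnull(muted)

On a muted day, the lookup fills muted for every row, the where clause drops everything, and the search returns zero results — so a trigger condition of "number of results != 1" would still evaluate 0 != 1 as true and fire.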
Sorry to resurrect this old thread, but I ran across this post in my search to solve my problem and thought I'd share the solution.

Here's the error in the log /opt/splunk/var/log/splunk/ta_tenable_tenable_securitycenter.log:

requests.exceptions.SSLError: HTTPSConnectionPool(host='mytenableserver.mycompany.local', port=443): Max retries exceeded with url: /rest/system (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1161)')))

There used to be a UI option in the add-on to disable certificate verification, but apparently this functionality was removed by Splunk. To fix it there are two options, described below in case the Tenable KB article changes in the future. Note that any update to the add-on would overwrite these changes, so you'll have to redo them each time you update it.

Best option: Append your custom CA cert to the one in the certifi folder, i.e. change the bundled trust store to include your CA certificate chain file (a text file with all the certificates, BASE64 encoded). Commands to append our CA to the certifi cacert.pem file:

su splunk
cp /opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/certifi/cacert.pem /opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/certifi/cacert.pem.original
cat /opt/splunk/etc/auth/mycerts/mycustom-ca-chain.pem >> /opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/certifi/cacert.pem

Next option: Disable SSL certificate verification (less secure). In /opt/splunk/etc/apps/TA-tenable/bin/tenable_consts.py set:

verify_ssl_for_sc_api_key = False

Source: https://docs.tenable.com/integrations/Splunk/Content/Splunk2/ConfigureTenablescCertificatesS2.htm
Hi @siv

If you have a CSV on a forwarder that you want to become a lookup in Splunk, then the best way to achieve this is probably to monitor the file (using monitor:// in inputs.conf) and send it to a specific index on your Splunk indexers. Then create a scheduled search which searches that index, retrieves the sent data, and outputs it to a lookup (using the | outputlookup command). A sketch of both pieces is at the end of this post.

Exactly how the resulting search ends up may depend on how/when the CSV is updated, but ultimately this should be a viable solution. There may be other solutions, but they would require significantly more engineering effort.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
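P.S. A minimal sketch of both pieces, with placeholder names throughout (the path, index, sourcetype, column names, and lookup filename are all examples):

# inputs.conf on the forwarder
[monitor:///opt/data/my_data.csv]
index = csv_staging
sourcetype = csv

Then the scheduled search on the search head, where column1/column2 stand in for the CSV's actual header fields:

index=csv_staging sourcetype=csv earliest=-1d@d
| table column1 column2
| outputlookup my_lookup.csv

The built-in csv sourcetype extracts fields from the header row at index time, which is why the columns are searchable by name here.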
Hi @danielbb

I think you need to look at how this is deployed to each of the 15 HFs; ultimately you would have to make *something* different on one of them in order for it to know which one should run the input. How are you deploying the app to the 15 HFs? Deployment Server? Ansible?

Each HF operates independently and not as part of a cluster; they aren't aware of each other, and there is no leader or anything like that which could be used to determine a particular role. If you are deploying via Ansible, you could use a templated inputs.conf to toggle the disabled flag on the input (a sketch is at the end of this post), but it really depends on your architecture and deployment approach. Please let us know so we can help further.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
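P.S. A rough sketch of the Ansible-templated approach; the monitor path is an example, and designated_hf is a made-up inventory variable you would define yourself (e.g. in group_vars):

# templates/inputs.conf.j2 -- designated_hf is an assumed, user-defined variable
[monitor:///var/log/example/app.log]
index = main
disabled = {{ '0' if inventory_hostname == designated_hf else '1' }}

Rendered on the one designated HF this produces disabled = 0; everywhere else it produces disabled = 1.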
Sorry for the late response. We opted to stick with using Apache as an SSL proxy to pass the user's credentials to Splunk.
Hi @avikc100

You can create a search that calculates the relevant dates, sets tokens from the results, and then use those tokens:

<search id="days">
  <query>| makeresults
| eval dayMinus0=strftime(now(), "%d/%m/%Y")
| eval dayMinus1=strftime(now()-86400, "%d/%m/%Y")
| eval dayMinus2=strftime(now()-(86400*2), "%d/%m/%Y")
| eval dayMinus3=strftime(now()-(86400*3), "%d/%m/%Y")
| eval dayMinus4=strftime(now()-(86400*4), "%d/%m/%Y")
| eval dayMinus5=strftime(now()-(86400*5), "%d/%m/%Y")</query>
  <done>
    <set token="dayMinus0">$result.dayMinus0$</set>
    <set token="dayMinus1">$result.dayMinus1$</set>
    <set token="dayMinus2">$result.dayMinus2$</set>
    <set token="dayMinus3">$result.dayMinus3$</set>
    <set token="dayMinus4">$result.dayMinus4$</set>
    <set token="dayMinus5">$result.dayMinus5$</set>
  </done>
</search>

Then use $dayMinusN$ for each panel title, where N is the number of days back.

Below is the full XML example of that dashboard for you to play with if it helps:

<dashboard version="1.1" theme="light">
  <label>SplunkAnswers1</label>
  <search id="days">
    <query>| makeresults
| eval dayMinus0=strftime(now(), "%d/%m/%Y")
| eval dayMinus1=strftime(now()-86400, "%d/%m/%Y")
| eval dayMinus2=strftime(now()-(86400*2), "%d/%m/%Y")
| eval dayMinus3=strftime(now()-(86400*3), "%d/%m/%Y")
| eval dayMinus4=strftime(now()-(86400*4), "%d/%m/%Y")
| eval dayMinus5=strftime(now()-(86400*5), "%d/%m/%Y")</query>
    <done>
      <set token="dayMinus0">$result.dayMinus0$</set>
      <set token="dayMinus1">$result.dayMinus1$</set>
      <set token="dayMinus2">$result.dayMinus2$</set>
      <set token="dayMinus3">$result.dayMinus3$</set>
      <set token="dayMinus4">$result.dayMinus4$</set>
      <set token="dayMinus5">$result.dayMinus5$</set>
    </done>
  </search>
  <search id="baseTest">
    <query>|tstats count where index=_internal by _time, host span=1d
| eval daysAgo=floor((now()-_time)/86400)</query>
    <earliest>-7d@d</earliest>
    <latest>now</latest>
    <sampleRatio>1</sampleRatio>
  </search>
  <row>
    <panel>
      <table>
        <title>$dayMinus0$</title>
        <search base="baseTest">
          <query>| where daysAgo=0 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>$dayMinus1$</title>
        <search base="baseTest">
          <query>| where daysAgo=1 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>$dayMinus2$</title>
        <search base="baseTest">
          <query>| where daysAgo=2 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>$dayMinus3$</title>
        <search base="baseTest">
          <query>| where daysAgo=3 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>$dayMinus4$</title>
        <search base="baseTest">
          <query>| where daysAgo=4 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>$dayMinus5$</title>
        <search base="baseTest">
          <query>| where daysAgo=5 | table host count</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
I want to replace the hard-coded text "Today" with the current system date in a Splunk report. Please help if it is possible. Please see the attachment.
♪♫♬ And they say that a hero could saaaaaaave us I'm not gonna stand here and waaaaaaaait ♪♫♬
@hemant_lnu wrote:

    We have one index, os_linux, which has 2 sourcetypes, and I see props and transforms are written. Can you help me understand how it's working?

    linux:audit
    Linux_os_syslog

props.conf:

[Linux_os_syslog]
TIME_PREFIX = ^
    Tells Splunk to look for the event timestamp at the beginning of the event.
TIME_FORMAT = %b %d %H:%M:%S
    Tells Splunk what a timestamp looks like.
MAX_TIMESTAMP_LOOKAHEAD = 15
    How far from TIME_PREFIX the timestamp is allowed to be.
SHOULD_LINEMERGE = false
    Don't combine lines.
LINE_BREAKER = ([\r\n]+)
    Events break after a newline (CR and/or LF).
TRUNCATE = 2048
    Cut off each event after 2048 characters.
TZ = US/Eastern
    Event timestamps are expected to be in this time zone.

transforms.conf:

[linux_audit]
DEST_KEY = MetaData:Sourcetype
REGEX = type=\S+\s+msg=audit
FORMAT = sourcetype::linux:audit
    Look for "type=", some text followed by whitespace, then "msg=audit". If it's found, set the sourcetype field to "linux:audit".

[auditd_node]
REGEX = \snode=(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host
    Look for "node=" in each event and set the 'host' field to the word that follows it.
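One note: for these transforms to run at index time, props.conf also needs a TRANSFORMS- line wiring them to the sourcetype. That line isn't shown in the post, but it would look something like this (the "route" suffix is an arbitrary name):

[Linux_os_syslog]
TRANSFORMS-route = linux_audit, auditd_node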
Nope. If you're pushing an app with an enabled input to 15 forwarders, you're getting an enabled input on each of them. The typical way to handle it is to define the input as disabled within the main app and push that to all forwarders, then create a small app which overrides the input's state to enabled and push this app to just one forwarder. A sketch of the two stanzas is below.
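A minimal sketch of that pattern; the app names and the monitor path are placeholders:

# main_app/default/inputs.conf -- deployed to all 15 forwarders
[monitor:///var/log/example/app.log]
index = main
disabled = 1

# enable_override/local/inputs.conf -- deployed to the single forwarder that should run it
[monitor:///var/log/example/app.log]
disabled = 0

In the global config context, any app's local directory outranks any app's default directory, so the override app flips the flag only on the forwarder where it is installed.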
For putting values in a nice xy-table you can use either the chart command or xyseries, but... you have only X and Y. You don't have values to put into the table. (If a simple event count works as the value, see the sketch below.)
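A minimal sketch along those lines, assuming the user field is literally named saml_user (rename to match your data) and counting events per hour as the cell value:

index=your_index sourcetype=your_sourcetype
| bin _time span=1h
| stats count by _time saml_user
| xyseries _time saml_user count

The chart command gets you the same shape in one step: | chart count over _time by saml_user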
I can change the return, and the time. I just need the syntax to create a table where y=time and x=saml user.