All Posts

Ok, I quickly managed to reindex the file by adding a temporary crcSalt to the forwarder's inputs.conf. The events are now indexed correctly after removing DATETIME_CONFIG=NONE. Many thanks to PickleRick for providing the solution.
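For anyone who lands here with the same problem, the temporary stanza looked roughly like this (the monitor path below is just a placeholder for the actual file):

[monitor:///path/to/the/file.log]
index = lts
crcSalt = <SOURCE>

Once the file was reindexed I removed crcSalt again, since it was only needed to force Splunk to re-read a file it had already seen.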
Ok, I removed the DATETIME_CONFIG=NONE setting, but now all events are gone. I also tried to empty the index:

/opt/splunk/bin/splunk clean eventdata -index lts
This action will permanently erase all events from the index 'lts'; it cannot be undone.
Are you sure you want to continue [y/n]? y
Cleaning database lts.

Then I restarted both the indexer and the forwarder, but no events are being sent.
Hi Romedawg, We're currently encountering the same issue. Could you elaborate a bit more on your solution? Specifically, what command did you use to generate the CA/self-signed certificate? You're making a server.pem file, but do you use that elsewhere in your setup? Also, in the second part of your post, you run an openssl verify on server.crt. Could you clarify where that file comes from? Is it the same one referenced in your sslConfig stanza, server.pem? Or is it another file entirely?
You can't. Some things simply take time.
Hi @danielbb, as the others have also said: you should have two different serverclasses (if you have a Deployment Server) or two distribution lists if you use another tool. I don't like the solution of hardcoding a rule in your script, because you would have to remember that configuration every time from now on and keep managing it! Ciao. Giuseppe
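As a rough illustration (the serverclass and app names here are invented), the Deployment Server version would look something like this in serverclass.conf:

[serverClass:linux_hosts]
whitelist.0 = linux-*

[serverClass:linux_hosts:app:linux_inputs]
restartSplunkd = true

[serverClass:windows_hosts]
whitelist.0 = win-*

[serverClass:windows_hosts:app:windows_inputs]
restartSplunkd = true

so each group of Forwarders receives only the app meant for it, instead of a rule hardcoded in a script.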
Hi @Cheng2Ready, you run your alert Monday-Friday, and you filter your results using the above search; this way you will not have any results on those days, so the alert will not fire. Ciao. Giuseppe
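For example, if the scheduling part looks like this (a made-up savedsearches.conf entry, cron fields in the usual minute/hour/day/month/weekday order):

cron_schedule = 0 8 * * 1-5

the search only ever runs Monday to Friday, so a filter that excludes those weekdays can never return results at the moment the alert actually runs.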
Hi @MatheoCaneva1, if you have 3 Indexers (IDX), 1 Search Head (SH), 1 Heavy Forwarder (HF) and a server with many roles, you should check whether this last one is also the Cluster Manager, in other words, whether you have an Indexer Cluster (even if it's strange that you don't know whether you have one!). You can check this by accessing that server and looking at [Settings > Indexer Cluster]: in that dashboard you can see whether you have an Indexer Cluster and its status. About the Search Head Cluster: you surely don't have one, because you have only one SH (at least three SHs are required!). The SHCD is the Search Head Cluster Deployer, a machine delegated to manage Search Head Clusters; since you don't have a Search Head Cluster, you don't have one of those either. Distributed Search isn't a Splunk role; you probably mean the Deployment Server, used to manage Forwarders and possibly Search Heads (if you don't have a cluster). Summarizing: if you have an Indexer Cluster, you have to upgrade your servers in this order:

Cluster Manager (that's also DS, LM, MC)
SH
IDX, HF
UF

If you don't have an Indexer Cluster:

SH
IDX, DS, LM, MC, HF
UF

Finally, I suggest reading this document that describes Splunk Architectures, to understand yours: https://docs.splunk.com/Documentation/SVA/current/Architectures/About  Ciao. Giuseppe
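If you prefer the CLI, you can also run this on the server you suspect is the Cluster Manager; on a non-clustered instance it should complain that the node isn't part of a cluster:

$SPLUNK_HOME/bin/splunk show cluster-status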
How can I shorten the time? I have more than 20 servers.
This is confusing: how did you get that "hard coded" text in the first place? In Splunk, the opposite is harder, rendering system dates into English word strings. But if you got those strings in some dataset, you sure can "translate" them back. Suppose your hard-coded input is called hardcoded; this search will turn the string into systemdate:

| eval decrement = case( hardcoded == "Today", 0, hardcoded == "Yesterday", 1, true(), replace(hardcoded, "Last (\d+).+", "\1") )
| eval systemdate = strftime(relative_time(now(), "-" . decrement . "day"), "%F")

decrement  hardcoded     systemdate
0          Today         2025-04-24
1          Yesterday     2025-04-23
2          Last 2nd Day  2025-04-22
3          Last 3rd Day  2025-04-21
4          Last 4th Day  2025-04-20
5          Last 5th Day  2025-04-19

Here is a full emulation for you to play with and compare with real data.

| makeresults format=csv data="hardcoded
Today
Yesterday
Last 2nd Day
Last 3rd Day
Last 4th Day
Last 5th Day"
| eval decrement = case( hardcoded == "Today", 0, hardcoded == "Yesterday", 1, true(), replace(hardcoded, "Last (\d+).+", "\1") )
| eval systemdate = strftime(relative_time(now(), "-" . decrement . "day"), "%F")
If you are still having the same issue next week, I will be in an environment where I can help better. I'm currently going off of memory. Please let me know if you still need help on Monday and I can troubleshoot further. Sorry I can't think of anything else to suggest right now.
Great catch. I noticed this as well and thought I had a smoking gun in 90Meter Smart Card Manager for my token. I noticed:

Extension Type: Subject Alternative Name
Oid: 2.5.29.17
Other Name: Principal Name=<DoD-ID>.ADMN@smil.mil

I put that Oid into web.conf, restarted, and got the same UiAuth errors.
I just noticed one of your last posts showing the following errors:

Previous errors with PIV/OID were:
ERROR UiAuth [2487972 TcpChannelThread] - SAN OtherName not found for configured OIDs in client certificate
ERROR UiAuth [2487972 TcpChannelThread] - CertBasedUserAuth: error fetching username from client certificate

The PIV pulls from the Subject Alternative Name (SAN) "Other Name." Validate the value of Other Name in the Subject Alternative Name on your PIV. I'm assuming the value you would like to pull is not found in that location. You will need to find the OID value for the correct location on your PIV and change the certBasedUserAuthPivOidList value to match it. Hope this helps.
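For illustration only (I'm assuming the list accepts dotted OIDs as well as the OpenSSL long names; 1.3.6.1.4.1.311.20.2.3 is the standard Microsoft UPN OtherName OID, so substitute whichever OID actually carries the value on your PIV):

certBasedUserAuthMethod = PIV
certBasedUserAuthPivOidList = 1.3.6.1.4.1.311.20.2.3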
Should be in the splunkd.log. Here is an example from someone's previous post:

09-29-2023 09:02:43.191 -0400 INFO AuthenticationProviderLDAP [12404 TcpChannelThread] - Could not find user=" \x84\x07\xd8\xb6\x05" with strategy="123_LDAP"
09-29-2023 09:02:43.192 -0400 ERROR HTTPAuthManager [12404 TcpChannelThread] - SSO failed - User does not exist: \x84\x07\xd8\xb6\x05
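If the log is too noisy to spot it, something along these lines (run on the Splunk server) narrows it down to the certificate/SSO authentication messages seen in this thread:

tail -f $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -E "UiAuth|CertBasedUserAuth|HTTPAuthManager"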
Thanks for the quick reply! Which logs should I be parsing to find the value that is being read? Logs on the Splunk server or on the Windows domain side? So far, I've just been tailing the splunkd.log.
EDIPI will NOT work with the account formatting in your last reply. You will definitely need PIV. Have you tried to sign into Splunk via token using a non-admin account? The web.conf spec page gives the different values you can use for certBasedUserAuthMethod. PIV would be correct for you, but certBasedUserAuthPivOidList may require a different value. I would look at your CAC values and find the field/attribute that holds the value you need Splunk to read. Per the web.conf spec page, https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/Webconf :

PIV (Personal Identity Verification): Use PIV, a 16-digit numeric identifier typically formatted as xxxxxxxxxxxxxxxx@mil. It is extracted from an "Other Name" field in the Subject Alternate Name which corresponds to one of the object identifiers (OIDs) that you configure in 'certBasedUserAuthPivOidList'.

It seems like the incorrect field is being read. Look through your logs to see if they show the value that is being read in, and try to match that value up on your CAC. Otherwise, here is the full web.conf configuration for CAC authentication that I've had success with:

[settings]
requireClientCert = true
sslRootCAPath = $SPLUNK_HOME\etc\auth\DOD.web.certificates\cert_chain_created.pem
enableCertBasedUserAuth = true
SSOMode = permissive
trustedIP = 127.0.0.1
certBasedUserAuthMethod = PIV
certBasedUserAuthPivOidList = Microsoft Universal Principal Name
allowSsoWithoutChangingServerConf = 1
No dice. Previous errors with PIV/OID were:

ERROR UiAuth [2487972 TcpChannelThread] - SAN OtherName not found for configured OIDs in client certificate
ERROR UiAuth [2487972 TcpChannelThread] - CertBasedUserAuth: error fetching username from client certificate

New error with EDIPI:

ERROR UiAuth [2488903 TcpChannelThread] - user=<DoDID#> action=login status=failure reason=sso-failed useragent=<browser stuff>
I'll give it a go. PIV just seems like the way to go because my UPN is <myDoDID#>.ADMN@smil.mil. From everything I read, it made sense to use PIV plus OIDs (I can see multiple OIDs in my cert).
Have you tried replacing PIV with EDIPI?

certBasedUserAuthMethod = EDIPI
I've been asked to assist another department with getting their Splunk configuration working with Windows UFs. They have a single Linux-based 9.4.1 indexer that is successfully fed by a large number of Linux UFs. For the most part I haven't found anything really odd about it. They are using self-signed certs that have several years of validity left on them. FTR, I am not a Windows admin, so I am kind of grasping at straws here.

Both their 'nix and Windows UFs use Splunk's Deployment Server for configuration. All UFs are using the same fwd_to_loghost and ssl_bundle apps; the only difference is the windowsconf_global or linux_global app, as appropriate (I have verified the correct app is installed). They made an attempt a year or so ago to get this working, with no success. I believe I've removed all trace of it and have removed and reinstalled the UF (using 9.4.1 this time) on the Windows host from scratch.

The Windows box connects to the Deployment Server and downloads the apps (fwd_to_loghost, ssl_bundle, and windowsconf_global) correctly, but when it tries to connect to the indexer to send logs it fails. The indexer says:

ERROR TcpInputProc [2957596 FwdDataReceiverThread-0] - Error encountered for connection from src=[redacted, correct IP address]:49902. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

The Windows box has some interesting things to say in C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log:

04-24-2025 14:03:59.924 -0700 INFO TcpOutputProc [2948 parsing] - Initializing connection for non-ssl forwarding to loghost.biostat.washington.edu:9997
...
04-24-2025 14:03:59.940 -0700 INFO CertificateData [2948 parsing] - channel=Forwarder, subject="emailAddress=[redacted],CN=loghost-uf.biostat.washington.edu,OU=Biostatistics,O=University of Washington,L=Seattle,ST=Washington,C=US", subjectAltName="DNS:keller-uf, DNS:keller-uf.biostat.washington.edu, DNS:loghost-uf, DNS:loghost-uf.biostat.washington.edu", serial=10, notValidBefore=1623099653, notValidAfter=1938459653, issuer="/C=US/ST=Washington/L=Seattle/O=UW/OU=Biostatistics/CN=zwickel.biostat.washington.edu/emailAddress=bite@uw.edu", sha256-fingerprint=10:31:07:BF:21:F2:49:41:34:E4:53:7F:89:C0:CB:81:99:6E:16:00:29:3E:C4:BC:C3:88:A1:CC:92:D0:AD:32
...
04-24-2025 14:04:00.362 -0700 WARN X509Verify [5944 HTTPDispatch] - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high-risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see: <http://docs.splunk.com/Documentation/Splunk/latest/Security/Howtoself-signcertificates>
04-24-2025 14:04:00.381 -0700 INFO CertificateData [5944 HTTPDispatch] - channel=HTTPServer, subject="O=SplunkUser,CN=SplunkServerDefaultCert", subjectAltName="", serial=9814D004673F8828, notValidBefore=1745011134, notValidAfter=1839619134, issuer="/C=US/ST=CA/L=San Francisco/O=Splunk/CN=SplunkCommonCA/emailAddress=support@splunk.com", sha256-fingerprint=DC:75:CA:ED:54:2A:28:12:D4:A1:B9:DC:37:29:75:F4:9B:56:1F:A2:C7:33:BB:EB:EF:02:37:AC:6E:81:E4:CA

I am not seeing anything in the log before the non-ssl line that appears to be an error, though it is a noisy log so it is quite possible I missed something. I have a working Splunk configuration with functional Windows and Linux UFs that I am trying to base this work on. It does not have the non-ssl or SplunkServerDefaultCert log entries.
I presume both are Bad Signs<tm>.

Both my working system and this one have sslRootCAPath set in deployment-apps/fwd_to_loghost/default/outputs.conf:

[tcpout]
defaultGroup = splunkssl

[tcpout:splunkssl]
compressed = true
server = loghost.biostat.washington.edu:9997
clientCert = $SPLUNK_HOME/etc/apps/ssl_bundle/default/UF/loghost-uf-bundle.crt
sslPassword = [redacted]
sslRootCAPath = $SPLUNK_HOME/etc/apps/ssl_bundle/default/biostat-ca.crt
sslVerifyServerCert = true

Neither of them [had] sslRootCAPath set anywhere else in deployment-apps. I've tried adding a deployment-apps/windowsconf_global/default/server.conf, though ConfigureSplunkforwardingtousesignedcertificates seems to say this is only needed for non-Windows hosts:

[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/apps/ssl_bundle/default/biostat-ca.crt

but the "unknown protocol" errors and the non-ssl and SplunkServerDefaultCert log entries persist.

As I said, I'm not a Windows admin, but given that the Windows hosts in the working environment are fine with paths like "$SPLUNK_HOME/etc/apps/ssl_bundle/default/..." in outputs.conf, and there is a reference to a clearly self-signed cert in the log, I have to presume these path entries are valid and working, so it should be finding both the cert and the bundle.

I've looked at the output of btool server and btool outputs, comparing them with the working instance, and I don't see any obvious or glaring problems. The new server.conf entry shows up in the output of btool server list, so it is being seen but it is not having any impact on the problem. I presume the "unknown protocol" error is because the Windows UF is trying to use a non-SSL connection, per the UF's log file entry.

I've read (and re-read, and re-re-read) https://docs.splunk.com/Documentation/Splunk/9.4.1/Security/ConfigureSplunkforwardingtousesignedcertificates and several forum posts that seem to be about this kind of problem, but so far nothing seems to have addressed it. I have to try not to break the Linux UFs that are working, so I have to be careful what files I touch in deployment-apps; I'm trying to limit myself to only modifying things in windowsconf_global when possible.

Where should I look to try to resolve this problem? Given the Linux UFs are working fine, I presume the problem is somewhere in the config for the Windows UF. Thanks in advance for any assistance.
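In case it's useful, this is the kind of btool check I can run on the Windows UF; my understanding is that the --debug flag shows which file each effective outputs setting comes from, which should reveal whether anything is overriding the splunkssl group and causing the non-ssl connection:

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool outputs list tcpout --debug
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool outputs list tcpout:splunkssl --debug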
crossposting: Splunk File Reader for "Malware_Detection_Logs" | Veeam Community Resource Hub