snix's Posts

I would like to start encrypting traffic between the universal forwarder on my Windows devices and my single Splunk 9.x indexer, which is on a Windows server. For the moment I am only concerned with getting SSL going on the indexer. I see you can also set up a certificate on the clients for authentication to the server, but I want to take it one step at a time.

I have a GoDaddy cert I would like to use with the indexer, and I have looked over much of the documentation on Splunk's site on all the ways you can make this configuration work, but it left me confused. I can't find any mention of what to do with the private key. I see where the documentation references the server certificate and even the sslPassword in the inputs.conf file, but no reference to where to put the key location. Is it just assumed you combine the server cert and the private key into a single PEM file, and if so, is the order just server cert first, then private key? Example:

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
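For reference, this is roughly what I am picturing for the indexer side once the cert and key are combined. The file name and path are placeholders, and I am assuming the receiving port also needs a splunktcp-ssl stanza alongside the [SSL] settings:

# inputs.conf on the indexer (hypothetical paths and values)
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME\etc\auth\mycerts\godaddy_combined.pem
sslPassword = <private key password>
requireClientCert = false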
Thank you! Yeah, I was hoping there was a way to just pull the field names from the event automatically, but for what I need right now I only need a few fields. The events do seem to vary a bit on which fields are included and not included, so I opted to take your second suggestion and split them up, and that worked like a charm:

| rex "AccessControlRuleAction: (?<AccessControlRuleAction>[^,]+)"
| rex "SrcIP: (?<SrcIP>[^,]+)"
| rex "DstIP: (?<DstIP>[^,]+)"
| rex "DstPort: (?<DstPort>[^,]+)"
We are ingesting Firepower logs via syslog using the cisco:asa TA. Many of the events I am interested in are Threat Defense events tied to an ID like FTD-6-430002. When I narrow my search down to events with just that ID, I find the rest of the event has plenty of info in key:value pairs, but no fields have been extracted from those pairs. Sanitized example event:

Mar 3 16:01:21 172.16.51.72 Mar 03 2023 22:01:21 firepower : %FTD-6-430002: EventPriority: Low, DeviceUUID: 00000-0000-0000-000000000000, InstanceID: 1, FirstPacketSecond: 2023-03-03T22:01:21Z, ConnectionID: 5000, AccessControlRuleAction: Allow, SrcIP: 100.100.100.100, DstIP: 200.200.200.200, SrcPort: 60000, DstPort: 10, Protocol: tcp

Is there a regex command that can dynamically extract all the field names from something like "DstPort: 10" into a field named DstPort with a value of 10? I know Cisco provides an eStreamer TA that may extract these fields, but it looks very involved to set up and I already have the syslog configured.
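For context, what I was hoping for is something along the lines of the extract (kv) command pointed at the delimiters in these events. This is just a sketch; the index name is a placeholder and I have not verified how it handles the space after the colon:

index=firewall sourcetype=cisco:asa "%FTD-6-430002"
| extract pairdelim="," kvdelim=":"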
I am trying to set up basic encryption between the universal forwarder and the indexer using the certs that come with the install. I am trying to follow the directions in this Splunk doc but am running into issues: https://docs.splunk.com/Documentation/Splunk/8.0.3/Security/ConfigureSplunkforwardingtousethedefaultcertificate

In the inputs.conf for the indexer, found under C:\Program Files\Splunk\etc\system\local on my Splunk server, I added this stanza:

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
requireClientCert = false

Then in the outputs.conf for the UF, found under C:\Program Files\SplunkUniversalForwarder\etc\system\local on one of my servers, I have this config:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = [SplunkServerNameHere]:9997
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
sslVerifyServerCert = false

[sslConfig]
caCertFile = cacert.pem
caPath = $SPLUNK_HOME\etc\auth

[tcpout-server://[SplunkServerNameHere]:9997]

I then restarted both the Splunk server and the UF and found logs were still getting ingested into the indexer with no issues, except from the UF that I was setting up to use an encrypted connection. It worked with no issue prior to the configuration change, but its traffic was getting rejected after the UF restarted. I looked at the splunkd.log file on the Splunk server and found this error:

ERROR TcpInputProc - Message rejected. Received unexpected message of size=369295616 bytes from src=[ClientIPHere]:60167 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
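One check I am planning to try next, to confirm whether the receiving port is actually answering with TLS. I am assuming the openssl binary bundled with Splunk can be called through splunk cmd, and the host name is a placeholder:

"C:\Program Files\Splunk\bin\splunk.exe" cmd openssl s_client -connect [SplunkServerNameHere]:9997

If the handshake fails right away, my guess is the receiving side is still a plain splunktcp input rather than splunktcp-ssl.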
I am trying to enable encryption of the traffic from all of my universal forwarders to the indexer. It looks like this involves updating the outputs.conf file on each forwarder (makes sense). No big deal, but the only way I have ever configured that file is via our software deployment solution when I install the forwarder on a given machine. After that I never touch the file.

I can use the same solution to do a simple copy and replace on each system, but I was wondering if this can be done via the app deployment system built into Splunk, the same way I would configure any other config file in any deployed app. I can see why you might not want to do it through the deployment server, in case you mess up a config file and all your forwarders lose their ability to communicate back to the indexer after the update. But if you can do it, I assume it might be as simple as creating a deployment app called something like "SplunkUniversalForwarder", dumping the config file in its local folder, and having it take precedence over the local $SPLUNK_HOME/etc/system/local/outputs.conf file on the given forwarder. Would that work?
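To make the idea concrete, this is roughly the layout I am imagining on the deployment server; the app and server class names are made up, and I am assuming outputs.conf belongs in the app's local directory:

$SPLUNK_HOME/etc/deployment-apps/
    uf_outputs_ssl/
        local/
            outputs.conf        (the [tcpout] settings to push)

# serverclass.conf (hypothetical names)
[serverClass:all_forwarders]
whitelist.0 = *

[serverClass:all_forwarders:app:uf_outputs_ssl]
restartSplunkd = true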
Thanks, but I still don't see how it would work. I see that with return you can pull a field out of a secondary search, but I am still having trouble seeing how you would combine it all. Could you provide a generic search example?
I am building a report that lists all the lockouts during a given period of time. If I look at Windows security event ID 4740 on a machine where one of the local accounts shows a lockout, all I can see is that the account was locked out; there is no information about which remote machine made the attempts. If I do a secondary search around the time of the lockout, I can see a number of failed logins, and from those I am able to get a Workstation Name, Source Network Address, and logon type. That info is usually enough to tell me why the account was locked out.

What I would like to do is run a main search for the lockout, then, when one is found, run a secondary search that looks backwards through that machine's logs starting from the time of the lockout event, finds the last failed login event right before the lockout happened (ideally verifying that the account on the lockout event matches the account on the failed attempt), and then pulls the Workstation Name, Source Network Address, and logon type fields from that event and appends them onto the first event. This would give me an event that might look something like this once the field names are cleaned up:

User Name, Target Computer, Source Computer, Source IP, Logon Type, Time

User Name, Target Computer, and Time come from the first search (the lockout search), and Source Computer, Source IP, and Logon Type come from the second search (the last-failed-attempt search). Doable?
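For illustration, here is roughly the shape of what I am picturing. The index, event codes, and field names are assumptions based on how the Windows TA usually names them, and the join is only a placeholder for whatever the better approach turns out to be:

index=wineventlog EventCode=4740
| rename Account_Name as User_Name, host as Target_Computer
| join type=left Target_Computer User_Name
    [ search index=wineventlog EventCode=4625
      | stats latest(Workstation_Name) as Source_Computer latest(Source_Network_Address) as Source_IP latest(Logon_Type) as logon_type by host Account_Name
      | rename host as Target_Computer, Account_Name as User_Name ]
| table _time User_Name Target_Computer Source_Computer Source_IP logon_type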
@efavreau I have not noticed that in the past. Our IIS logs getting into Splunk were usually maybe a minute behind but always came in pretty consistently. That said, I happened to have a call with an outfit that provides Splunk professional support yesterday morning. They took a look at it and we tried a couple of things.

First we tried changing from crcSalt = <SOURCE> to initCrcLength = 400. They said they had run into issues with crcSalt in the past, so we tried initCrcLength and set it to 400; with all the comments and fields at the beginning of the logs, I wanted it to check enough characters to get past the comments and see the time, server name, and site, so we could say for sure each log was unique. After the change I noticed the same issue where, after a UF reboot, you would see logs up to that point but nothing new would come in. The next thing they noticed was that we were hitting our maximum allowed KBps on the UF on the web servers. The default is 256 KBps, so we bumped it up to 750 KBps. We could see it was using the full 750 KBps and just left it there because my meeting with them was over. After that I checked Splunk and noticed it had also stopped logging after the UF on each box was restarted. The next morning I came back and found all the logs were working as expected and were pulling from all the log files for every site hosted on each server.

What I think happened is that after changing over to crcSalt or initCrcLength, Splunk needs to re-index all the existing IIS logs on each server, since it is using a new way to track the log files. This is an intensive process that takes a long time, as each server has IIS logs going all the way back to 2018, about 20+ GB in total across all the web servers. I think it starts indexing from when the UF was restarted and works its way back until it has pulled in all the old logs. Since this was going to take some time, it just looked like logging to Splunk had stopped while it caught up on the backlog. This also explains why the UF on each web server was maxing out the 256 KBps transfer limit as it copied over all the old logs. Then I came in the next morning and found it had caught up on all the logs and was showing traffic coming in live from each site's logs on each server.

In conclusion, I think using crcSalt will probably work with no issues (and may be the better option of the two), but I can verify that if you use initCrcLength and set the character length long enough to confirm each log is unique, that resolves the issue I was running into.
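For anyone else trying the same thing, these are the two settings we touched, shown where I believe they live; the values are just what we happened to use:

# inputs.conf on the UF (same monitor stanza as my original post)
[monitor://C:\inetpub\logs\LogFiles*\*.log]
initCrcLength = 400

# limits.conf on the UF (assuming this is where the thruput cap belongs)
[thruput]
maxKBps = 750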
I have it set to auto restart after a deployment, but I took your advice and ran it on a couple of the servers and I did start to see logs populate. The odd thing is that they populated back up to the point of the restart time, but then it stopped ingesting new logs again.
@PaveIP I think you are on the correct path. I took your advice and searched "splunk iis logs initCrcLength problem". The first page that popped up talked about this same issue and said they found an error in the _internal index. I checked and found the same error myself: https://answers.splunk.com/answers/530434/indexing-issue-with-iis-logs-file-will-not-be-read.html

I then continued searching and found many people recommending putting this into your inputs.conf file:

crcSalt = <SOURCE>

I also read that in order to get it to work I needed to create a new log file on the web server, so I stopped IIS, renamed the current log file to something.log.old on each site, and started IIS again: https://medium.com/@anon5123/splunk-sometimes-doesnt-index-logs-entirely-e611efe55eca

I could see it generated a new log file. That same site then said you need to go to https://localhost:8000/debug/refresh and do a refresh. I then checked Splunk and found that I am not indexing my IIS logs at all. Any advice on what I might have done wrong?
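For clarity, this is where I added the setting, inside the same monitor stanza from my original post; I am assuming crcSalt belongs at the stanza level rather than somewhere global:

[monitor://C:\inetpub\logs\LogFiles*\*.log]
disabled = 0
sourcetype = iisw3c
index = iis
crcSalt = <SOURCE>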
I had the same thought, and you are right, there are so many sites and servers, and with load balancers in the mix it is hard to tell what traffic is going where. So I decided to focus on just one server and see whether traffic was getting logged for each site on that server, and then make sure that any traffic that was getting logged was also showing up in Splunk.

I did this by remoting into the server, looking at the most current log file for each site, and making sure traffic was getting logged under W3SVC1, W3SVC2, and W3SVC3. Once that was verified, I went to Splunk, searched that host for all IIS-related events, and checked which sources the events were logged under. This is the query I used to see which log files were making it into Splunk for a specific server:

index=iis host="servernamehere" | stats count by source

In Splunk I could only see events from the W3SVC2 logs for the entire day, even though I saw traffic being logged for all three sites.
We have multiple IIS web servers that each host multiple sites. Each site's logs are saved to the default location of:

C:\inetpub\logs\LogFiles\W3SVC*\*.log

This is the stanza I use on each server:

[monitor://C:\inetpub\logs\LogFiles*\*.log]
disabled = 0
sourcetype = iisw3c
index = iis

The odd thing is that, depending on the day and time, I might see logs from W3SVC2 but nothing from W3SVC1 or 3. Then on another server I might see W3SVC2 and 3 but not 1. It just seems random which logs it will pull from on any given server on any given day. I checked a couple of the servers to make sure logs were getting generated even though Splunk was not showing any ingested, and they were. Am I missing something here, like needing to make a stanza per website or something like that?
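To illustrate what I mean by a stanza per website, I am picturing something like this, with one monitor entry per W3SVC directory (the directory numbers are just examples):

[monitor://C:\inetpub\logs\LogFiles\W3SVC1\*.log]
disabled = 0
sourcetype = iisw3c
index = iis

[monitor://C:\inetpub\logs\LogFiles\W3SVC2\*.log]
disabled = 0
sourcetype = iisw3c
index = iis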
I understand your point; I don't want to keep it so generic that I filter out potentially useful logs, and I don't want to overload the indexer. But at the same time I want to be able to filter on just the general names in the part of the log that would be considered the user agent string field once the fields are extracted. That way, if the vendor changes to a new user agent string or adds a new one, I don't have to go in, find it, and add it to the filter. I can see Cloudflare doing this, and they already have 5+ user agent strings I would have to filter on. I know I am trying to have my cake and eat it too in this situation, but I am just trying to find a good balance.

I still have not had a chance to filter at the IIS level, but I will update after I have a chance to look at it. I am not a fan of modifying IIS in this situation, but it does have the added benefit of not filling up the web servers with useless logs as well.
Here are a couple of events I pulled from my IIS logs:

2020-04-06 00:27:11 W3SVC2 <Server Host Name Here> <Destination IP Address Here> GET <Sub Directory Path> - 443 - <Source IP Address Here> HTTP/1.1 Mozilla/5.0+(compatible;+Cloudflare-Traffic-Manager/1.0;++https://www.cloudflare.com/traffic-manager/;+pool-id:<Removed some ID string>) - - <Website Domain Address> 200 0 0 265 219 62 -

2020-04-06 00:00:45 W3SVC2 <Server Host Name Here> <Destination IP Address Here> GET <Sub Directory Path> - 443 - <Source IP Address Here> HTTP/1.1 KEMP+1.0 - - <Website Domain Address> 200 0 0 5400 161 56

The health checks are usually 5-10 seconds apart, depending on the load balancer and how many different sites are hosted and being checked on the web server. Since we need to verify each site is up and move off of it quickly, we have to check constantly, but it does look to be eating gigs of logs each day. Thanks for the Microsoft document link; I will look at doing the filtering that way and see if it ends up being the better approach in the end.
@PaveIP I agree with you; I would like to have it more targeted. That said, my original post where I reference the cs_User_Agent_ field does not actually work, since I think that field is only extracted after I do the filtering. If I am wrong and the field should be there, let me know and I can try again. I could tighten the filter, but I am worried it is putting too much strain on the indexer. Is there a more efficient way to do this? Can I do the filtering on the UF before it even gets to the indexer?
I will keep an eye on it and report back if I continue to see issues, but I think your much simplified query did the trick. I have not seen any red or yellow since I switched to it. Thank you!
Okay, I think I have it working by adjusting the regex to this:

REGEX = (?i)(\S*kemp*[^\s]+|\S*Cloudflare*[^\s]+|\S*Nessus*[^\s]+|\S*tenable*[^\s]+)

More testing will need to be done to verify it works as expected. Even then, I think my regex may be inefficient to the point that it is causing issues on the indexer. Under the "Health Status of Splunkd" area I am starting to see TailReader-0 sometimes show yellow or red and then go back to green. If anyone has tips on the best way to implement this filter, please let me know.
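One simplification I have been considering: since the transform's REGEX only has to match somewhere in the raw event to send it to the null queue, a plain case-insensitive alternation might do the same job with less backtracking. This is untested on my side, and it would match those strings anywhere in the event, not just in the user agent field:

REGEX = (?i)(kemp|cloudflare|nessus|tenable)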
I am trying to save on space and licensing with my IIS logs. Currently the vast majority of my logs are just constant health checks from our load balancers and security tools. I would like to filter these out by their user agent strings before they are indexed. Currently I have user agent strings that come from KEMP, Cloudflare, Nessus, and Tenable that I would like to filter out.

On the indexer I went into $SPLUNK_HOME\etc\apps\Splunk_TA_microsoft-iis\local, modified the props.conf file, and added this line to the sourcetype stanza I use for the logs:

TRANSFORMS-null = setnull

In the same folder I modified the transforms.conf file and added this stanza:

[setnull]
REGEX = cs_User_Agent_="(?i)(\S*kemp*[^\s]+|\S*Cloudflare*[^\s]+|\S*Nessus*[^\s]+|\S*tenable*[^\s]+)"
DEST_KEY = queue
FORMAT = nullQueue

Should the filtering happen at the indexer, or should I move the settings to the props.conf and transforms.conf files in the app I deploy to the UF? Maybe my regex is just not right; I could not find a good example and guessed at how to reference the field to parse. Hopefully someone can let me know if I am even close to getting it right.
After looking closer at it, I found that most of the events contained multiple events combined into one. Not sure why, because I would think what you have would work. I don't pretend to understand much about carriage returns and new lines from the little programming I have had to deal with, but it looked good to me. I took some of the output from the log file, pasted it into Notepad++, and turned on "show all characters"; it showed CR LF at the end of each line, so that looks right to me.

That said, I commented out the LINE_BREAKER line and replaced it with BREAK_ONLY_BEFORE = \d\d?:\d\d:\d\d, which I found under the log4j stanza, and it worked. Since I don't grasp 100% of what I am doing, I am sure this is not the best way to do it, but it did get the results I was looking for. If someone understands what is going on and would like to explain it, I am all ears. I think this will end up being a good post in general for others trying to do something similar who just need a useful example of what it would look like.
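For anyone copying this later, here is the shape of the props.conf stanza that ended up working for me; the sourcetype name is a placeholder, and I am assuming SHOULD_LINEMERGE has to stay enabled for BREAK_ONLY_BEFORE to take effect:

[my_custom_sourcetype]
SHOULD_LINEMERGE = true
# LINE_BREAKER = ...   (commented out, as described above)
BREAK_ONLY_BEFORE = \d\d?:\d\d:\d\d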
Holy you know what... that is exactly what I am looking for. Thank you for such a great and specific example! You even built out how to pull in the time from the logs, which I had no idea how to do but was going to be the next thing to figure out. I was able to implement it and verify it works exactly how I wanted. Thank you!!!!