1. That's good. You should use search-time extractions, as I said from the beginning.
2. And as I said before, without additional configuration, indexed fields are not searchable the same way search-time fields are. That doesn't mean "transforms don't work".
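To illustrate the difference, here is a hedged sketch (the sourcetype and field names are placeholders, not from the thread). A search-time extraction is a single EXTRACT in props.conf on the search head; an index-time extraction needs props.conf plus transforms.conf on the indexing tier, and the resulting indexed field additionally needs fields.conf to behave like a search-time field in searches:

```ini
# --- Search-time extraction: props.conf on the search head ---
[my:sourcetype]
EXTRACT-user = user=(?<user>\S+)

# --- Index-time extraction: props.conf on the indexer/HF ---
[my:sourcetype]
TRANSFORMS-user = extract_user_indexed

# transforms.conf on the indexer/HF
[extract_user_indexed]
REGEX = user=(\S+)
FORMAT = user::$1
WRITE_META = true

# fields.conf on the search head -- without this, the indexed field
# must be searched with the user::value syntax, not user=value
[user]
INDEXED = true
```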
Hello @Meett In splunkd.log I see a copy of the error "External handler failed with code '1' and output ''." without any specific additional information. Luckily, I solved the issue in this case: it was not an add-on problem but a Google Cloud permissions issue. In fact, I did not have the Viewer role on the Projects, which is required to execute the queries from Splunk correctly. A very simple cause, complicated by the fact that the add-on returns no details about the error. Bye, thanks
It's unlikely but not impossible that your particular setup triggers some bug in the software. What I would do:
1) Compare pre- and post-upgrade configs to verify whether anything changed.
2) Do a fresh reinstall of 9.1 where your 9.3 wasn't working and reapply the config.
3) If you have the means, spin up a fresh indexer with an HTTP input and point that UF at the new indexer.
If no obvious cause pops up, raise a case with Splunk support.
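For step 3, a minimal setup could look roughly like this (a sketch only; the token, hostnames, and index are placeholders, and your UF version must support the httpout output group):

```ini
# inputs.conf on the fresh test indexer: enable an HEC token
[http://uf_test_token]
token = 11111111-2222-3333-4444-555555555555
index = main
disabled = 0

# outputs.conf on the UF: point its HTTP output at the new indexer
[httpout]
httpEventCollectorToken = 11111111-2222-3333-4444-555555555555
uri = https://new-indexer.example.com:8088
```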
Log analysis needs two things. One - as @ITWhisperer already mentioned - is the logs themselves. You must have the data to analyse; you can't analyse something you don't have.

The other important thing is the goal of your analysis - what you want to get from your logs. A question you want answered using the data you have. You don't just "analyse logs" for fun. You want the logs to tell you, for example, whether anyone tried to log in to your network and failed. How many such attempts were made? Was anyone persistent in their attempts, or were there just "random" occurrences? Or you can check performance data - what connection quality your clients had, what bandwidth they used, and so on.

Of course, to answer such questions you need a relevant set of data for each use case. You typically can't tell much about security from performance data and vice versa. (Sometimes anomalies in one type of data can be a hint of something happening elsewhere, but that's a much more advanced topic - don't bother with it for now.)
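As a hedged sketch of the failed-login example (the index, sourcetype, search string, and field name are all assumptions - yours will differ depending on your device's log format):

```spl
index=network sourcetype=router_syslog "authentication failure"
| stats count AS failed_attempts BY src_ip
| where failed_attempts > 5
| sort - failed_attempts
```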
You need to start there then. This will depend on your router/modem and what capabilities you have available to you there. Essentially, you need to find a way to get your logs ingested into Splunk so you can start your analysis.
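Many home routers can forward syslog to a remote host. If yours can, a minimal listener on the Splunk side might look like this (a sketch; the port, index, and sourcetype names are assumptions, and the index must already exist):

```ini
# inputs.conf -- listen for syslog forwarded from the router
[udp://514]
index = network
sourcetype = router_syslog
connection_host = ip
```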
Hi there, I am using the Splunk Add-on for Symantec Endpoint Protection, following this documentation: https://docs.splunk.com/Documentation/AddOns/released/SymantecEP/Configureinputs When I log in to the Symantec dashboard, it shows Endpoint Status such as: Total Endpoints / Up-to-date / Out-of-date / Offline / Disabled / Host Integrity Failed. Has anyone used Symantec and solved this problem?
hi @gcusello thanks for your inputs, I have a correction to my query. In the outer query I am trying to pull the ORDERS that are "Not available". I need to match those ORDERS with the ORDERS in the subquery. The result to be displayed is ORDERS & UNIQUEID; the common field in the two queries is ORDERS. Below is the query I am using:

index=source "status for : *" AND "Not available"
| rex field=_raw "status for : (?<ORDERS>\S+)"
| join ORDERS
    [search Message="Request for : *"
    | rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
    | rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\""]
| table ORDERS UNIQUEID
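A join-free alternative might be worth trying, assuming both event types live in the same index (a sketch built from the strings and fields in the query above; the stats approach avoids join's subsearch row limits):

```spl
index=source ("status for : *" OR Message="Request for : *")
| rex field=_raw "status for : (?<ORDERS>\S+)"
| rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
| rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\""
| eval not_available=if(searchmatch("Not available"), 1, 0)
| stats max(not_available) AS not_available values(UNIQUEID) AS UNIQUEID BY ORDERS
| where not_available=1
| table ORDERS UNIQUEID
```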
Hi Rick, Instead of props.conf and transforms.conf on the HF (index-time extraction), we have moved the regex settings into props.conf on all of our search heads (search-time extraction), placed manually in the /opt/splunk/etc/system/local directory, as below:

props.conf

[aws:elb:accesslogs]
EXTRACT-aws_elb_accesslogs = ^(?P<Protocol>\S+)\s+(?P<Timestamp>\S+)\s+(?P<ELB>\S+)\s+(?P<ClientPort>\S+)\s+(?P<TargetPort>\S+)\s+(?P<RequestProcessingTime>\S+)\s+(?P<TargetProcessingTime>\S+)\s+(?P<ResponseProcessingTime>\S+)\s+(?P<ELBStatusCode>\S+)\s+(?P<TargetStatusCode>\S+)\s+(?P<ReceivedBytes>\S+)\s+(?P<SentBytes>\S+)\s+\"(?P<Request>[^\"]+)\"\s+\"(?P<UserAgent>[^\"]+)\"\s+(?P<SSLCipher>\S+)\s+(?P<SSLProtocol>\S+)\s+(?P<TargetGroupArn>\S+)\s+\"(?P<TraceId>[^\"]+)\"\s+\"(?P<DomainName>[^\"]+)\"\s+\"(?P<ChosenCertArn>[^\"]+)\"\s+(?P<MatchedRulePriority>\S+)\s+(?P<RequestCreationTime>\S+)\s+\"(?P<ActionExecuted>[^\"]+)\"\s+\"(?P<RedirectUrl>[^\"]+)\"\s+\"(?P<ErrorReason>[^\"]+)\"\s+(?P<AdditionalInfo1>\S+)\s+(?P<AdditionalInfo2>\S+)\s+(?P<AdditionalInfo3>\S+)\s+(?P<AdditionalInfo4>\S+)\s+(?P<TransactionId>\S+)

This is working now, but it seems strange that the props and transforms configurations wouldn't work, since the regex is the same.
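One possible explanation for the index-time version seeming not to work (a hypothesis, not a confirmed diagnosis): indexed fields created by transforms do not show up like search-time fields unless fields.conf on the search head marks them as indexed, or you search them with the :: syntax (e.g. ELBStatusCode::503 rather than ELBStatusCode=503). A sketch, using one field name from the regex above:

```ini
# fields.conf on the search head -- one stanza per indexed field;
# the names must match the FORMAT keys used in transforms.conf
[ELBStatusCode]
INDEXED = true
```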
In addition to the technical consideration @PickleRick points out, you should make a blunt case to your developers that this is logically impossible unless only ever one user accesses your entire Web site with credentials, or there is a strict mechanism preventing more than one user from accessing your Web site during any prescribed time interval. That, and only if an authentication failure in the code is the ONLY reason a 401 is returned. (HTTP 401 signals missing or invalid credentials in general, not necessarily a failed login attempt by a specific user.) Present the above two logs to your developers and ask them what logic they could use (without Splunk) to tell why the second event relates to the same user as the first event. If your logs contain additional identifying information, such as a client IP address, there is a better chance of such correlation. But your mock data don't suggest the existence of such data.
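If a client IP address were present, a correlation sketch might look like this (hedged: the index and the clientip/status field names are assumptions, not from the mock data):

```spl
index=web status=401
| stats count AS failures, min(_time) AS first_seen, max(_time) AS last_seen BY clientip
| where failures > 3
```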
Splunkd requires the TLS client usage if the usage is specified. (Been there several times - a customer used mutual auth and their CA had issued certs with the wrong usage for the UFs.) I also don't think I've seen a fairly modern CA which doesn't push usages by default in its policies. So it's a volenti non fit iniuria case when someone issues such a crappy cert.
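For reference, the mutual-auth settings in play look roughly like this (a sketch; the paths, port, and hostnames are placeholders):

```ini
# inputs.conf on the indexer -- require and verify client certificates
[splunktcp-ssl:9997]

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
requireClientCert = true
sslCommonNameToCheck = uf.example.com

# outputs.conf on the UF -- present a client cert and verify the server's
[tcpout:primary]
server = indexer.example.com:9997
clientCert = /opt/splunkforwarder/etc/auth/mycerts/uf.pem
sslVerifyServerCert = true
```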
Ha. I do think splunkd etc. should require it when acting as a server, especially given that it requires it when acting as a client! You're right about lazy CA policies, though.