All Posts


Please update your subject to be more descriptive of the question you would like help with. We are volunteers here and would prefer to spend our time working on issues we can help with, so being more descriptive allows us to focus our time and energy, and potentially gets you a quicker and more accurate response: win/win!
Splunk claims this was fixed in 9.2.2, and it is listed in the "fixed issues" for that version. I wish I could confirm, but as of 9.2.2 my DS struggles to render any page in "Forwarder Management". Support has been struggling to determine the cause for 3+ months now.
We have been able to validate that the issue was with the TIBCO File Watcher process locking the file until it completed writing it to disk; therefore, the Splunk UF could not open/read the file to ingest it. I wanted to check with the TIBCO people whether there was a way to change the permissions with which the File Watcher process opened the file (ensure it had FILE_SHARE_READ), but they suggested a simpler and just-as-effective solution. TIBCO will create the files, initially, as ".tmp" files, so they won't match the name pattern in the monitor stanza. When the process of writing to disk has completed, TIBCO will drop the ".tmp" so the files match the monitor stanza. That way, Splunk will only try to ingest files that have been fully written to disk and, therefore, are not locked.
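For reference, a minimal inputs.conf sketch of this pattern (the path, index and sourcetype are hypothetical placeholders; the key point is that the monitor pattern only matches the final file name, never the ".tmp" intermediate):

[monitor:///data/tibco/output/*.log]
index = tibco
sourcetype = tibco:filewatcher
disabled = 0
# Files arrive as e.g. report.log.tmp and are renamed to report.log when complete.
# Belt-and-braces: explicitly blacklist the temporary extension as well.
blacklist = \.tmp$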
Dear Splunkers, I would like to ask for your support in adapting my search query to return results when downtime spans a specific time window, e.g. 3 consecutive days. My search query is the following:

| table _time, status, component_hostname, uptime
| sort by _time asc
| streamstats last(status) AS status by component_hostname
| sort by _time asc
| reverse
| delta uptime AS Duration
| reverse
| eval Duration=abs(round(Duration/60,4))
| search uptime=0

Like this I was able to identify components with uptime=0. Now I would like to extend my query to display results when a specific component stays at uptime=0 for several consecutive days, e.g. 2 or 3 days. Thank you
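One possible approach (a sketch, assuming one or more events per component per day and that uptime=0 marks a down state; field names are taken from the query above):

| bin _time span=1d
| stats max(uptime) AS daily_uptime by component_hostname _time
| sort 0 component_hostname _time
| eval is_down=if(daily_uptime=0,1,0)
| streamstats reset_on_change=true count AS consecutive_days by component_hostname is_down
| where is_down=1 AND consecutive_days>=3

reset_on_change=true restarts the count whenever is_down flips, so consecutive_days only accumulates over an unbroken run of down days; max(uptime)=0 means the component never came up that day.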
Hi @splunklearner , you could (it's not mandatory) put props.conf and transforms.conf on the UFs, and I suggest doing this, also because these files are usually in standard add-ons. Then you have to put them on Search Heads and on Indexers. I suppose you are speaking about the F5 WAF Security add-on; did you read the documentation at https://splunkbase.splunk.com/app/2873 ? Ciao. Giuseppe
I am deployed to a new project in Splunk. We have logs coming from F5 WAF devices sent to our syslog server. Then we will install a UF on our syslog server and forward the data to our indexer: Syslog --- UF --- Indexer. And we have a few on-premise servers and a few AWS EC2 instances. Can someone explain this project to me in more depth? There is no HF in our environment as of now, so where should we write props.conf and transforms.conf: on the indexer or the UF? If we write them on the indexer, will they work, given that indexing happens there? Will props.conf take effect before the data is indexed?
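As a rough sketch of how this usually splits (the sourcetype name is a hypothetical placeholder): index-time settings must sit on the first full Splunk instance that parses the data - here the indexer, since a UF generally does not parse - while search-time extractions belong on the search head:

# props.conf on the indexer - index-time parsing (a sketch)
[f5:waf:syslog]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TRUNCATE = 10000

# props.conf on the search head - search-time field extraction (a sketch)
[f5:waf:syslog]
KV_MODE = auto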
Hi @PickleRick , you said in a perfect way what I tried to explain: on the DC there are the connection events (e.g. 4624 or 4634 etc...) but not the local events from the clients. For this reason I suggested installing the UF also on the clients and not only on the DC. Ciao and thanks for the details. Giuseppe
Thanks - this is definitely helping a lot. I would love to join the tables in the results. What I also noticed is that the description isn't always exactly "Leaver Request for"; that is why I added affect_dest="STL Leaver", which checks just for leaver tickets:

identity: nsurname
email: name.surname@domain.com
extensionattribute10: nsurnameT1@domain.com
extensionattribute11: name.surname@consultant.com
first: name
last: surname
_time: 2024-10-31 09:46:55
affect_dest: STL Leaver
active: true
description: Leaver Request for Name Surname - 31/10/2024
dv_state: active
number: INC01
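One join-free way to merge the identity fields and the ticket fields onto a single row is stats over a shared key (a sketch only; it assumes both datasets carry a comparable email value, and the field names here mirror the sample above):

... base search covering both the identity data and the ticket data ...
| eval joinkey=lower(coalesce(email, extensionattribute10))
| stats values(*) AS * by joinkey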
https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Propsconf#Structured_Data_Header_Extraction_and_configuration

Start here and see what you can find. Otherwise, please provide your props.conf configuration if possible so we can actually see what is being attempted versus an example of the actual output. A sample of the log helps when deciphering how your existing props.conf is interacting with the data.
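For reference, a minimal structured-data sketch of what that doc page describes (the sourcetype name, delimiter and timestamp field are hypothetical placeholders for your data):

# props.conf - structured data header extraction (a sketch)
[my_structured_sourcetype]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
TIMESTAMP_FIELDS = timestamp

Note that INDEXED_EXTRACTIONS must be deployed where the file is actually read (the UF, for file monitoring), not only on the indexer.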
My team has a setup with correlation_search_1 and service1 creating notable events that use the notable event aggregation policy policy1. Now I made additions: correlation_search2, service2 and policy2. But when I went to the episode review window, I found that notable event episodes from search2 are still using policy1. How do I get this set of episodes to follow policy2 without disturbing the previous setup following policy1? I can't find any setting that allows me to do so; please help me find it if it exists.
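In case it helps (a hedged sketch of the usual fix, not a confirmed diagnosis): notable events are grouped by whichever enabled aggregation policy's filtering criteria they match, falling back to the default policy otherwise, and already-open episodes keep the policy they were created with. So in policy2's filtering criteria, include events where, for example:

Include the events if: source matches correlation_search2

and make sure policy1's criteria do not also match search2's notables. Only newly created episodes will then follow policy2.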
We are trying to onboard data from F5 WAF devices to our Splunk. The F5 team is sending it as key-value pairs, and one of them is "headers:xxxxxxxxx" (nearly 40 words). When the data is onboarded and we check it in Splunk Web, the headers field shown below in the table format is not captured correctly; it shows some other value. The same happens with another field, whose value is getting truncated. Please help me in this case.
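Two limits worth checking (a hedged sketch, not a confirmed diagnosis): automatic key-value extraction at search time stops reading a value after a character limit, and very long events can also be truncated at index time. The sourcetype name below is a placeholder:

# limits.conf on the search head - raise the auto-KV character limit (sketch)
[kv]
maxchars = 20480

# props.conf on the indexer/HF - raise the per-event truncation limit (sketch)
[your_f5_sourcetype]
TRUNCATE = 100000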
The web service should be disabled on indexers, so that's not unusual. Check splunkd.log.
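For example (assuming a default installation path):

bash$ tail -100 /opt/splunk/var/log/splunk/splunkd.log | grep -iE "error|warn"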
Hi, I am trying to change the indexer configuration from one cluster master to another, but in the process of this change the indexer never starts. The web service log looks like this:

bash$ tail -f var/log/splunk/web_service.log
2024-11-01 16:26:18,141 INFO [6724f3196d7f1cd30e7350] _cplogging:216 - [01/Nov/2024:16:26:18] ENGINE Bus EXITED
2024-11-01 16:26:18,141 INFO [6724f3196d7f1cd30e7350] root:168 - ENGINE: Bus EXITED
2024-11-01 16:38:48,635 INFO [6724f608607f04aeca7810] __init__:174 - Using default logging config file: /data/apps/SPLUNK_INDEXER_1/splunk/etc/log.cfg
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk level=INFO
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.appserver level=INFO
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.appserver.controllers level=INFO
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.appserver.controllers.proxy level=INFO
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.appserver.lib level=WARN
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.pdfgen level=INFO
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.archiver_restoration level=INFO

I have now even removed the clustering configuration from server.conf, but the Splunk instance still has the same issue. Has anyone else faced this?

Regards,
Pravin
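In case it is useful when re-pointing a peer, a minimal server.conf sketch (the hostname and key are placeholders, and the attribute is master_uri on pre-9.x builds):

# server.conf on the indexer (cluster peer) - point at the new cluster manager (sketch)
[clustering]
mode = peer
manager_uri = https://new-cluster-manager.example.com:8089
pass4SymmKey = <must match the new manager's pass4SymmKey>

After editing, a clean restart of splunkd is required. Also, web_service.log is usually a red herring for startup failures; clustering errors land in splunkd.log.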
What should the configuration be if we only need to monitor ESXi hosts, but not VMs, via this extension? I believe that, because of VMware technicalities, performance metric values like CPU, memory and disk utilization differ when a machine is monitored as a VM via vCenter discovery versus when it is monitored as a server.
You're using TRANSFORMS, which means you're defining indexed fields (which you should generally avoid) without defining those fields as indexed in fields.conf. You should rather define them as search-time extractions and move the definitions to the SH layer.
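Concretely, a sketch of the search-time equivalent (the stanza and transform names are taken from the question below; REPORT- makes the same transforms.conf regex run at search time instead of index time):

# props.conf on the search head (sketch)
[aws:elb:accesslogs]
REPORT-aws_elb_accesslogs = aws_elb_accesslogs_extract_all_fields

The transforms.conf stanza can stay as it is; with REPORT- the named capture groups become ordinary search-time fields, and no fields.conf entry is needed.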
Thank you @ITWhisperer for the quick solution. It's working for me and I am doing some more tweaks to it.
@gcusello You're confusing us a bit here. Domain Controllers have their own logs. They reflect what's going on on those DCs. So they will contain the information about domain activities, but they will not contain the information about local activities on the workstations. This distinction is important because if a user A tries to access a file share \\B\C$ logging in from workstation D, you will see domain Security events from Kerberos activity, both from the initial login to D and from B, but you will not see whether - for example - user A was actually granted access to the share \\B\C$, because he might simply not have been granted permissions to the share. That has nothing to do with the authentication process, which involves the DC. Authorization here is a local thing, and the logs (I think you have to explicitly enable access auditing, BTW) will not be available on the DC, because logs by default are not "forwarded" anywhere.
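For completeness, enabling the relevant local auditing on the file server looks roughly like this (a sketch; subcategory names can vary with Windows version and locale, and NTFS-level auditing additionally needs a SACL on the audited objects):

auditpol /set /subcategory:"File Share" /success:enable /failure:enable
auditpol /set /subcategory:"Detailed File Share" /success:enable /failure:enable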
Hi all,

We have ingested some logs using a heavy forwarder as below in /opt/splunk/etc/apps/test_inputs/local/:

inputs.conf
[monitor:///opt/splunk/test/test.log]
index=test
sourcetype=aws:elb:accesslogs
disabled=0
start_from=oldest
_meta = splunk_orig_fwd::splunkfwd_hostname

props.conf
[aws:elb:accesslogs]
TRANSFORMS-aws_elb_accesslogs = aws_elb_accesslogs_extract_all_fields

transforms.conf
[aws_elb_accesslogs_extract_all_fields]
REGEX = ^(?P<Protocol>\S+)\s+(?P<Timestamp>\S+)\s+(?P<ELB>\S+)\s+(?P<ClientPort>\S+)\s+(?P<TargetPort>\S+)\s+(?P<RequestProcessingTime>\S+)\s+(?P<TargetProcessingTime>\S+)\s+(?P<ResponseProcessingTime>\S+)\s+(?P<ELBStatusCode>\S+)\s+(?P<TargetStatusCode>\S+)\s+(?P<ReceivedBytes>\S+)\s+(?P<SentBytes>\S+)\s+\"(?P<Request>[^\"]+)\"\s+\"(?P<UserAgent>[^\"]+)\"\s+(?P<SSLCipher>\S+)\s+(?P<SSLProtocol>\S+)\s+(?P<TargetGroupArn>\S+)\s+\"(?P<TraceId>[^\"]+)\"\s+\"(?P<DomainName>[^\"]+)\"\s+\"(?P<ChosenCertArn>[^\"]+)\"\s+(?P<MatchedRulePriority>\S+)\s+(?P<RequestCreationTime>\S+)\s+\"(?P<ActionExecuted>[^\"]+)\"\s+\"(?P<RedirectUrl>[^\"]+)\"\s+\"(?P<ErrorReason>[^\"]+)\"\s+(?P<AdditionalInfo1>\S+)\s+(?P<AdditionalInfo2>\S+)\s+(?P<AdditionalInfo3>\S+)\s+(?P<AdditionalInfo4>\S+)\s+(?P<TransactionId>\S+)

Before we applied props.conf and transforms.conf, we used the rex function to test the logs on the search head as below, and the fields appeared when searched:

index=test sourcetype=aws:elb:accesslogs | rex field=_raw "^(?P<Protocol>\S+)\s+(?P<Timestamp>\S+)\s+(?P<ELB>\S+)\s+(?P<ClientIP>\S+)\s+(?P<TargetIP>\S+)\s+(?P<RequestProcessingTime>\S+)\s+(?P<TargetProcessingTime>\S+)\s+(?P<ResponseProcessingTime>\S+)\s+(?P<ELBStatusCode>\S+)\s+(?P<TargetStatusCode>\S+)\s+(?P<ReceivedBytes>\S+)\s+(?P<SentBytes>\S+)\s+\"(?P<Request>[^\"]+)\"\s+\"(?P<UserAgent>[^\"]+)\"\s+(?P<SSLCipher>\S+)\s+(?P<SSLProtocol>\S+)\s+(?P<TargetGroupArn>\S+)\s+\"(?P<TraceId>[^\"]+)\"\s+\"(?P<DomainName>[^\"]+)\"\s+\"(?P<ChosenCertArn>[^\"]+)\"\s+(?P<MatchedRulePriority>\S+)\s+(?P<RequestCreationTime>\S+)\s+\"(?P<ActionExecuted>[^\"]+)\"\s+\"(?P<RedirectUrl>[^\"]+)\"\s+\"(?P<ErrorReason>[^\"]+)\"\s+(?P<AdditionalInfo1>\S+)\s+(?P<AdditionalInfo2>\S+)\s+(?P<AdditionalInfo3>\S+)\s+(?P<AdditionalInfo4>\S+)\s+(?P<TransactionId>\S+)"

However, when we ingested the logs as usual, the fields weren't extracted as per the rex during the search. Is there anything missing, or why isn't the regex being applied to the logs? Appreciate it if anyone has any advice on this. Thank you in advance.
Hi @Ananya_23 , you have two choices: display the raw events (it's usually the first choice in the display options) or use _raw as a field in a table visualization. Ciao. Giuseppe
Hi @hazem , taking the logs from the DC, you have all the events from all the clients, and you can have Security, System and Application logs. Obviously you don't have local events, e.g. local user accesses. Ciao. Giuseppe