All Posts
Hi, I am trying to change the indexer configuration from one cluster master to another, but in the process of this change the indexer never starts. The web service log looks like this:

bash$ tail -f var/log/splunk/web_service.log
2024-11-01 16:26:18,141 INFO [6724f3196d7f1cd30e7350] _cplogging:216 - [01/Nov/2024:16:26:18] ENGINE Bus EXITED
2024-11-01 16:26:18,141 INFO [6724f3196d7f1cd30e7350] root:168 - ENGINE: Bus EXITED
2024-11-01 16:38:48,635 INFO [6724f608607f04aeca7810] __init__:174 - Using default logging config file: /data/apps/SPLUNK_INDEXER_1/splunk/etc/log.cfg
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk level=INFO
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.appserver level=INFO
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.appserver.controllers level=INFO
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.appserver.controllers.proxy level=INFO
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.appserver.lib level=WARN
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.pdfgen level=INFO
2024-11-01 16:38:48,636 INFO [6724f608607f04aeca7810] __init__:212 - Setting logger=splunk.archiver_restoration level=INFO

I have since even removed the clustering configuration from server.conf, but the Splunk instance still shows the same issue. Has anyone else faced this?

Regards,
Pravin
What should the configuration be if we only need to monitor ESXi hosts, but not VMs, via this extension? I believe that, because of VMware technicalities, performance metric values such as CPU, memory and disk utilization differ when a machine is monitored as a VM via vCenter discovery versus when it is monitored as a server.
You're using TRANSFORMS, which means you're defining indexed fields (which you should generally avoid) without declaring those fields as indexed in fields.conf. You should rather define them as search-time extractions and move the definitions to the SH (search head) layer.
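For illustration, a minimal sketch of what the search-time variant could look like in props.conf on the search head, reusing the sourcetype and field names from the related post (only the first few capture groups are shown here; the full regex goes on one line):

[aws:elb:accesslogs]
# EXTRACT-<name> is evaluated at search time, so no fields.conf entry is needed
EXTRACT-elb_fields = ^(?P<Protocol>\S+)\s+(?P<Timestamp>\S+)\s+(?P<ELB>\S+)\s+(?P<ClientPort>\S+)

Deploy this in an app on the search head (or via the SHC deployer) rather than on the heavy forwarder.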
Thank you @ITWhisperer for the quick solution. It's working for me, and I'm making some more tweaks to it.
@gcusello You're confusing us a bit here. Domain Controllers have their own logs. They reflect what's going on on those DCs, so they will contain information about domain activities, but they will not contain information about local activities on the workstations. This distinction is important: if user A tries to access a file share \\B\C$ after logging in from workstation D, you will see domain Security events from the Kerberos activity, both from the initial login to D and from B, but you will not see whether, for example, user A was actually granted access to the share \\B\C$, because he might simply not have been granted permissions to the share. That has nothing to do with the authentication process, which involves the DC. Authorization here is a local thing, and those logs (I think you have to explicitly enable access auditing, BTW) will not be available on the DC, because logs by default are not "forwarded" anywhere.
Hi all,

We have ingested some logs using a heavy forwarder, with the configuration below in /opt/splunk/etc/apps/test_inputs/local/:

inputs.conf

[monitor:///opt/splunk/test/test.log]
index = test
sourcetype = aws:elb:accesslogs
disabled = 0
start_from = oldest
_meta = splunk_orig_fwd::splunkfwd_hostname

props.conf

[aws:elb:accesslogs]
TRANSFORMS-aws_elb_accesslogs = aws_elb_accesslogs_extract_all_fields

transforms.conf

[aws_elb_accesslogs_extract_all_fields]
REGEX = ^(?P<Protocol>\S+)\s+(?P<Timestamp>\S+)\s+(?P<ELB>\S+)\s+(?P<ClientPort>\S+)\s+(?P<TargetPort>\S+)\s+(?P<RequestProcessingTime>\S+)\s+(?P<TargetProcessingTime>\S+)\s+(?P<ResponseProcessingTime>\S+)\s+(?P<ELBStatusCode>\S+)\s+(?P<TargetStatusCode>\S+)\s+(?P<ReceivedBytes>\S+)\s+(?P<SentBytes>\S+)\s+\"(?P<Request>[^\"]+)\"\s+\"(?P<UserAgent>[^\"]+)\"\s+(?P<SSLCipher>\S+)\s+(?P<SSLProtocol>\S+)\s+(?P<TargetGroupArn>\S+)\s+\"(?P<TraceId>[^\"]+)\"\s+\"(?P<DomainName>[^\"]+)\"\s+\"(?P<ChosenCertArn>[^\"]+)\"\s+(?P<MatchedRulePriority>\S+)\s+(?P<RequestCreationTime>\S+)\s+\"(?P<ActionExecuted>[^\"]+)\"\s+\"(?P<RedirectUrl>[^\"]+)\"\s+\"(?P<ErrorReason>[^\"]+)\"\s+(?P<AdditionalInfo1>\S+)\s+(?P<AdditionalInfo2>\S+)\s+(?P<AdditionalInfo3>\S+)\s+(?P<AdditionalInfo4>\S+)\s+(?P<TransactionId>\S+)

Before we applied props.conf and transforms.conf, we tested the logs on the search head with the rex command below, and the fields appeared in the search:

index=test sourcetype=aws:elb:accesslogs
| rex field=_raw "^(?P<Protocol>\S+)\s+(?P<Timestamp>\S+)\s+(?P<ELB>\S+)\s+(?P<ClientIP>\S+)\s+(?P<TargetIP>\S+)\s+(?P<RequestProcessingTime>\S+)\s+(?P<TargetProcessingTime>\S+)\s+(?P<ResponseProcessingTime>\S+)\s+(?P<ELBStatusCode>\S+)\s+(?P<TargetStatusCode>\S+)\s+(?P<ReceivedBytes>\S+)\s+(?P<SentBytes>\S+)\s+\"(?P<Request>[^\"]+)\"\s+\"(?P<UserAgent>[^\"]+)\"\s+(?P<SSLCipher>\S+)\s+(?P<SSLProtocol>\S+)\s+(?P<TargetGroupArn>\S+)\s+\"(?P<TraceId>[^\"]+)\"\s+\"(?P<DomainName>[^\"]+)\"\s+\"(?P<ChosenCertArn>[^\"]+)\"\s+(?P<MatchedRulePriority>\S+)\s+(?P<RequestCreationTime>\S+)\s+\"(?P<ActionExecuted>[^\"]+)\"\s+\"(?P<RedirectUrl>[^\"]+)\"\s+\"(?P<ErrorReason>[^\"]+)\"\s+(?P<AdditionalInfo1>\S+)\s+(?P<AdditionalInfo2>\S+)\s+(?P<AdditionalInfo3>\S+)\s+(?P<AdditionalInfo4>\S+)\s+(?P<TransactionId>\S+)"

However, when we ingested the logs as usual, the fields weren't extracted at search time the way they were with rex. Is there anything missing, and why isn't the regex being applied to the logs? We'd appreciate any advice on this. Thank you in advance.
Hi @Ananya_23, you have two choices: display the raw events (it's usually the first choice in the display options) or use _raw as a field in a table visualization. Ciao. Giuseppe
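For example, something along these lines (the index and sourcetype here are placeholders):

index=your_index sourcetype=your_sourcetype
| table _time _raw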
Hi @hazem, taking the logs from the DC, you have all the events from all the clients, and you can have the Security, System and Application logs. Obviously you don't have local events, e.g. local user accesses. Ciao. Giuseppe
That's true. In fact, if the "header" part is constant except for the changing timestamp, of course, I'd simply SEDCMD it away. Then you'd have a pure JSON payload, a proper timestamp and no unnecessary "header" bloat in your index.
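As a rough sketch, assuming a hypothetical header of the form "2024-11-01 16:26:18 myapp:" in front of the JSON payload (the sourcetype name and patterns are placeholders):

[my:json:sourcetype]
# Timestamp extraction runs before SEDCMD in the ingestion pipeline,
# so the time is still parsed from the header before it is stripped
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
# Strip everything up to the first opening brace, leaving pure JSON in _raw
SEDCMD-strip_header = s/^[^{]*//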
Hi, I have a unique request: I want the Event Actions --> Show Source link to be displayed on the dashboard itself, instead of drilling down by opening the query --> event and then --> Show Source.
@PickleRick, you are right about the line breaker; I used a capturing group to keep only the JSON messages.
Hi Splunkers,

We have an issue with our Telegram alert. We set the alert to send every 5 minutes, but what actually happens is that the alert is sent only once or twice per day. We telnet to the proxy server and confirm it is connected:

telnet xxx.xxx.co.id 8080
Trying xx.xx.xx.xx...
Connected to xxx.xxx.co.id.

We also checked splunkd.log and there is an SSL error. Below is the error log:

11-04-2024 10:30:07.063 +0700 ERROR sendmodalert [2216772 AlertNotifierWorker-0] - action=telegram STDERR - WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)'))': /bot7980126779:AAGIDUqqXlAEdfeLE7_OcOiqtJCIOzVljXc/sendMessage?chat_id=-4525666353&text=%3Cb%3ESPLUNK+ALERT+MESSAGE%0A------------------------------%3C%2Fb%3E%0A%3Cb%3EAlert+Name%3C%2Fb%3E%3A+test_telegram+%0A%3Cb%3ESEVERITY%3C%2Fb%3E%3A+High+%0A%3Cb%3EMESSAGE%3C%2Fb%3E%3A+R2.BRN.PE-MOBILE.2%3B56+%0A%3Cb%3EResults+Link%3C%2Fb%3E%3A+https%3A%2F%2Fdcosplunksearchhead%3A8000%2Fapp%2Falert_telegram%2Fsearch%3Fq%3D%257Cloadjob%2520scheduler__usercomm_YWxlcnRfdGVsZWdyYW0__RMD5486a20947b8a80a2_at_1730691000_1982%2520%257C%2520head%25201%2520%257C%2520tail%25201%26earliest%3D0%26latest%3Dnow&parse_mode=HTML
11-04-2024 10:30:07.363 +0700 INFO sendmodalert [2216772 AlertNotifierWorker-0] - action=telegram - Alert action script completed in duration=6326 ms with exit code=5
11-04-2024 10:30:07.363 +0700 WARN sendmodalert [2216772 AlertNotifierWorker-0] - action=telegram - Alert action script returned error code=5
11-04-2024 10:30:07.363 +0700 ERROR sendmodalert [2216772 AlertNotifierWorker-0] - Error in 'sendalert' command: Alert script returned error code 5.

Please help us to solve this issue. Thanks.
@catta99 Please check the code below:

function test() {
    console.log("in test");
}

require([
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!"
], function($, mvc) {
    console.log("hello 2");

    function test1() {
        console.log("in test1");
    }

    $(document).ready(function() {
        $("#back").click(function() {
            alert("button");
            var param = $(this).data("param");
            console.log(param);
            console.log("click");
            history.back();
        });
        $("#backtest").click(function() {
            test();
            alert("Back");
        });
        $("#back1test").click(function() {
            test1();
            alert("Back1");
        });
    });
});

The Splunk Web Framework works a little differently from regular or native jQuery/JavaScript behaviour, especially for HTML elements. So when you write

<button onclick="test()">Back</button>
<button onclick="test1()">Back1</button>

it may not work, because when you run the dashboard, the dashboard XML is rendered into a compiled version of JS, CSS and HTML. Only after all this processing does it allow the developer to add custom JS, CSS and even custom components through a custom JS file like button_test.js. So all customisation and element event handling should be defined in this JS only.

Suggestion: using inline onclick handlers like the above is not best practice; just code what you want (e.g. history.back();) in the custom JS instead.

I hope this will help you. Let me know if you need further help on this.

Thanks,
KV

An upvote would be appreciated if any of my replies help you solve the problem or gain knowledge.
but the restart process is already done and it still shows the same value
This is correct; however, Splunk will log a message every time it copies a timestamp from a previous event. These messages will affect the metrics on the Data Quality dashboard in the Monitoring Console/Cloud Monitoring Console.
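If you want to see how often that happens, a search along these lines against _internal should surface those messages (the exact message text can vary between versions):

index=_internal sourcetype=splunkd component=DateParserVerbose "timestamp of previous event"
| stats count by host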
You can try this as your line breaker:

([\r\n](?=\d{4}-\d{2}-\d{2}T)|(?<=ContentGenerator) )

See https://regex101.com/r/gw5YHj/1
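In props.conf that would look something like this (the sourcetype name is a placeholder):

[your:sourcetype]
# The first capture group of LINE_BREAKER is consumed as the event boundary
LINE_BREAKER = ([\r\n](?=\d{4}-\d{2}-\d{2}T)|(?<=ContentGenerator) )
SHOULD_LINEMERGE = false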
Ok, right you are. The docs are not very good around this: indeed, if a timestamp cannot be parsed, it will be assumed to be from the previous event, but in your case that would mean you'd have to make sure the whole blob gets forwarded to a single downstream (idx or HF).
@PickleRick if there is no timestamp within a log entry, then the timestamp from another event that has one will be applied to it.
What do you mean by "timestamp will be taken"? The timestamp is either parsed out of the event or assumed to be the time of ingestion (or it can be explicitly provided for HEC input).
@PickleRick The timestamp will be taken from another event; that won't be an issue. That's what the requirement is, and I need help with writing a regex to match the pattern.