All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

OK. Mind you, these are not directly Splunk-related points; it's more my personal outlook based on 25 years of admin experience.

1. For me, you're doing too many things. I understand your approach, but I prefer the KISS approach for shell scripts; for more complex things I'd go with Python. That's my personal taste - I don't like overly complicated bash scripts because they tend to get messy quickly. To be honest, I would do it the other way around: write a single simple script that manages a single frozen index storage with the given parameters (size/time), and possibly add a "managing" script that spawns that script for each index independently.
2. I don't see the point in generating a random PROCESS_ID, and even less so in pulling in openssl as an external dependency just to generate that value.
3. You are hardcoding many paths - LOG_FILE, CONFIG_FILE, FROZEN_PATH... That might be OK for a one-off script for internal use, but for a portable solution it's much more user-friendly to make them configurable. The easiest way is to externalize those definitions to a separate file and include it with the dot (source) command. Bonus: both scripts can share the same config file; at present you have to configure each script separately.
4. Chmod-ing another script so you can run it... that's not nice. That belongs in the installation instructions.
5. I don't like the idea of a script that creates the service file. Just provide a service file template with instructions to customize it if needed. (I would probably use cron instead of a service, but that's me - I'm old.)
6. IMHO a script manipulating relatively sensitive data should use a lock file to prevent it from being run multiple times in parallel.
7. The mechanics of deleting frozen buckets are highly suboptimal: you're spawning several find and du calls after removing each file, which is a lot of unnecessary disk scanning. Also, why remove the files from the bucket directory and only afterwards remove the now-empty directory?
8. To stay consistent with how Splunk handles buckets, you should not use ctime or mtime but rather take the timestamps from the bucket boundaries. (They might produce the same order, since buckets will probably be frozen in the same order they should roll out of frozen, but that's not guaranteed - especially if you use shared storage for frozen data across multiple cluster nodes and do deduplication.)
9. Sorry to say, but it shows that this was written with ChatGPT - some design choices are inconsistent (like the timestamp manipulation, and sometimes doing arithmetic with built-in bash functionality while other times spawning bc).

So again, I do appreciate the effort. It's just that I would either do it completely differently (which might simply be my personal taste) or - if it's meant to be a quick and dirty hack - simply use tmpreaper (if your distro provides it) or do find /frozen_path -ctime +X -delete (yes, it doesn't account for size limits, but it's quick and reliable). If you want to use size limits, just list the directory sizes, sort by date, sum them up until you hit the limit, and delete the rest. Et voilà. Honestly, don't overthink it.
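Point 6 (a lock file) and the quick-and-dirty find variant from the closing paragraph can be sketched together in a few lines of shell. This is a minimal illustration, not the poster's actual script: FROZEN_PATH, LOCK_FILE, and the 30-day limit are made-up example values.

```shell
#!/bin/sh
# Example values only - in a real deployment these would come from a
# sourced config file (see point 3).
FROZEN_PATH="${FROZEN_PATH:-/tmp/frozen_demo}"
LOCK_FILE="${LOCK_FILE:-/tmp/frozen_cleanup.lock}"
MAX_AGE_DAYS="${MAX_AGE_DAYS:-30}"

mkdir -p "$FROZEN_PATH"

# Take a non-blocking exclusive lock so two instances never run in parallel.
exec 9>"$LOCK_FILE"
if ! flock -n 9; then
    echo "another instance holds the lock, exiting" >&2
    exit 0
fi

# Single find pass: remove top-level bucket directories older than the limit.
# "-exec ... +" batches the removals instead of spawning rm per directory.
find "$FROZEN_PATH" -mindepth 1 -maxdepth 1 -type d -ctime +"$MAX_AGE_DAYS" \
    -exec rm -rf {} +
echo "cleanup done"
```

For a size limit you would instead list the bucket directories with their sizes, sort by date, and sum until the limit is hit, as the closing paragraph suggests; and per point 8, parsing the bucket-boundary timestamps from the directory names would be more faithful to Splunk than ctime.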
Hello, I want to extract the purple highlighted part.

[Time:29-08@17:53:03.562] [60569219] 17:53:03.562 10.82.10.245 local3.notice [S=2952575] [SID=d57afa:30:1773441](N 71121555) #98)gwSession[Deallocated] [Time:29-08@17:53:03.562]
[60569220] 17:53:05.158 10.82.10.245 local3.notice [S=2952576] [SID=d57afa:30:1773434] (N 71121556) RtxMngr::Transmit 1 OPTIONS Rtx Left: 0 Dest: 211.237.70.18:5060, TU: AcSIPDialog(#28)(N 71121557) SIPTransaction(#471)::SendMsgBuffer - Resending last message [Time:29-08@17:53:05.158]
[60569221] 17:53:05.654 10.82.10.245 local3.notice [S=2952577] [SID=d57afa:30:1773434] (N 71121558) RtxMngr::Dispatch - Retransmission of message 1 OPTIONS was ended. Terminating transaction... [Time:29-08@17:53:05.654]
[60569222] 17:53:05.654 10.82.10.245 local3.notice [S=2952578] [SID=d57afa:30:1773434] (N 71121559) AcSIPDialog(#28)::TransactionFail - ClientTransaction(#471) failed sending message with CSeq 1 OPTIONS CallID 20478380282982024175249@1.215.255.202, the cause is Transport Error [Time:29-08@17:53:05.654]
[60569223] 17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655]
[60569224] 17:53:05.656 10.82.10.245 local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656]
[60569225] 17:53:05.657 10.82.10.245
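The highlight colours do not survive as plain text, so here is a hedged sketch assuming the wanted part is the alarm name that follows "RAISE-ALARM:" (e.g. acProxyConnectionLost). The regex is the reusable piece; GNU grep is only used to demonstrate it on a sample line.

```shell
# Assumption: the "purple" part is the alarm name after "RAISE-ALARM:".
# \K discards everything matched so far, so only the name is printed.
line='RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost.'
printf '%s\n' "$line" | grep -oP 'RAISE-ALARM:\K[^:]+'
# prints: acProxyConnectionLost
```

In Splunk the same pattern would go into a rex command, e.g. | rex "RAISE-ALARM:(?<alarm_name>[^:]+)". If the highlighted part is actually a different field, adjust the anchor text accordingly.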
I also have a lookup which is being updated, but the user is n/a. It's a CSV lookup. I cannot find any relevant occurrences of outputlookup before the update event. What other ways, besides outputlookup, could there be that resulted in the lookup being updated?
I also encountered exactly the same problem on my search head cluster. Now I'm on version 9.1.5 and still having this issue.
Hi @arielpconsolaci, I think you just need to replace ['vizapi/SplunkVisualizationBase', 'vizapi/SplunkVisualizationUtils'] with ['api/SplunkVisualizationBase', 'api/SplunkVisualizationUtils'] in both visualization_src.js and webpack.config.js.
Please share your configuration as it is probably something amiss there. Also, please share your raw event (anonymised of course), preferably in a code block so we can see all the spacing, so we can figure out what needs changing in the configuration.
On the Cluster Master, I find the below output:

splunk btool server list --debug | grep -i local

/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf [clustering]
/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf available_sites = site1
/apps/splunk/splunk/etc/system/local/server.conf maintenance_mode = false
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf master_uri = clustermaster:one
/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf mode = master
/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf multisite = true
/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf pass4SymmKey = **************
/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf replication_factor = 2
/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf search_factor = 1
/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf site_replication_factor = origin:1, total:2
/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf site_search_factor = origin:1, total:2
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf [clustermaster:one]
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf master_uri = https:webaddress:8089
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf multisite = true
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf pass4SymmKey = *****************
/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf [general]
/apps/splunk/splunk/etc/system/local/server.conf pass4SymmKey = ****************
/apps/splunk/splunk/etc/system/local/server.conf serverName = webaddress
/apps/splunk/splunk/etc/apps/100_gnw_cluster_master_base/local/server.conf site = site1
/apps/splunk/splunk/etc/system/local/server.conf [kvstore]
/apps/splunk/splunk/etc/apps/100_gnw_license_master/local/server.conf [license]
/apps/splunk/splunk/etc/apps/100_gnw_license_master/local/server.conf master_uri = https:webaddress:8089
/apps/splunk/splunk/etc/system/local/server.conf [lmpool:auto_generated_pool_download-trial]
/apps/splunk/splunk/etc/system/local/server.conf description = auto_generated_pool_download-trial
/apps/splunk/splunk/etc/system/local/server.conf quota = MAX
/apps/splunk/splunk/etc/system/local/server.conf slaves = *
/apps/splunk/splunk/etc/system/local/server.conf stack_id = download-trial
/apps/splunk/splunk/etc/system/local/server.conf [lmpool:auto_generated_pool_forwarder]
/apps/splunk/splunk/etc/system/local/server.conf description = auto_generated_pool_forwarder
/apps/splunk/splunk/etc/system/local/server.conf quota = MAX
/apps/splunk/splunk/etc/system/local/server.conf slaves = *
/apps/splunk/splunk/etc/system/local/server.conf stack_id = forwarder
/apps/splunk/splunk/etc/system/local/server.conf [lmpool:auto_generated_pool_free]
/apps/splunk/splunk/etc/system/local/server.conf description = auto_generated_pool_free
/apps/splunk/splunk/etc/system/local/server.conf quota = MAX
/apps/splunk/splunk/etc/system/local/server.conf slaves = *
/apps/splunk/splunk/etc/system/local/server.conf stack_id = free
/apps/splunk/splunk/etc/system/default/server.conf alert_store = local
/apps/splunk/splunk/etc/system/default/server.conf suppression_store = local
/apps/splunk/splunk/etc/system/default/server.conf conf_replication_summary.includelist.refine.local = (system|(apps/*)|users(/_reserved)?/*/*)/(local/...|metadata/local.meta)
/apps/splunk/splunk/etc/system/local/server.conf [sslConfig]
/apps/splunk/splunk/etc/system/local/server.conf sslPassword = ***********************

On the Deployment Server, I find the below output:

/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf [clustering]
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf master_uri = clustermaster:one
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf mode = searchhead
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf [clustermaster:one]
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf master_uri = https://webaddress:8089
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf multisite = true
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf pass4SymmKey = *************************
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf [general]
/apps/splunk/splunk/etc/system/local/server.conf pass4SymmKey = *************************
/apps/splunk/splunk/etc/system/local/server.conf serverName = webaddress
/apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf site = site1
/apps/splunk/splunk/etc/system/local/server.conf [kvstore]
/apps/splunk/splunk/etc/system/local/server.conf [license]
/apps/splunk/splunk/etc/system/local/server.conf master_uri = https://webaddress:8089
/apps/splunk/splunk/etc/system/local/server.conf [lmpool:auto_generated_pool_download-trial]
/apps/splunk/splunk/etc/system/local/server.conf description = auto_generated_pool_download-trial
/apps/splunk/splunk/etc/system/local/server.conf quota = MAX
/apps/splunk/splunk/etc/system/local/server.conf slaves = *
/apps/splunk/splunk/etc/system/local/server.conf stack_id = download-trial
/apps/splunk/splunk/etc/system/local/server.conf [lmpool:auto_generated_pool_forwarder]
/apps/splunk/splunk/etc/system/local/server.conf description = auto_generated_pool_forwarder
/apps/splunk/splunk/etc/system/local/server.conf quota = MAX
/apps/splunk/splunk/etc/system/local/server.conf slaves = *
/apps/splunk/splunk/etc/system/local/server.conf stack_id = forwarder
/apps/splunk/splunk/etc/system/local/server.conf [lmpool:auto_generated_pool_free]
/apps/splunk/splunk/etc/system/local/server.conf description = auto_generated_pool_free
/apps/splunk/splunk/etc/system/local/server.conf quota = MAX
/apps/splunk/splunk/etc/system/local/server.conf slaves = *
/apps/splunk/splunk/etc/system/local/server.conf stack_id = free
/apps/splunk/splunk/etc/system/default/server.conf alert_store = local
/apps/splunk/splunk/etc/system/default/server.conf suppression_store = local
/apps/splunk/splunk/etc/system/default/server.conf conf_replication_summary.includelist.refine.local = (system|(apps/*)|users(/_reserved)?/*/*)/(local/...|metadata/local.meta)
/apps/splunk/splunk/etc/system/local/server.conf [sslConfig]
/apps/splunk/splunk/etc/system/local/server.conf sslPassword = *************************
Hello, instance principal authentication is not working in the OC19 realm. Any plan to support OC19? The debug log contains:

2024-09-03 08:16:14,077 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/intermediate.pem HTTP/1.1" 200 None
2024-09-03 08:16:14,413 DEBUG Starting new HTTP connection (1): x.x.x.x:80
2024-09-03 08:16:14,416 DEBUG http://x.x.x.x:80 "GET /opc/v2/instance/region HTTP/1.1" 200 14
2024-09-03 08:16:14,416 DEBUG Unknown regionId 'eu-frankfurt-2', will assume it's in Realm OC1
2024-09-03 08:16:14,636 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/cert.pem HTTP/1.1" 200 None
2024-09-03 08:16:14,646 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/key.pem HTTP/1.1" 200 1675
2024-09-03 08:16:14,692 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/intermediate.pem HTTP/1.1" 200 None
2024-09-03 08:16:14,695 DEBUG Starting new HTTPS connection (1): auth.eu-frankfurt-2.oraclecloud.com:443

Thank you! NagyG
I have events from a Trellix HX appliance and I need to adjust the _time of the log events, because the date comes in as 9/3/20 while we are at 9/3/2024. How can this be changed? Thanks.
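If the appliance's own date cannot be corrected at the source, one hedged workaround is to tell Splunk to ignore the broken timestamp and stamp events with the current (index) time instead. A minimal props.conf sketch for the parsing tier; the sourcetype name here is a placeholder, not the real Trellix HX sourcetype:

```
[trellix:hx:placeholder]
DATETIME_CONFIG = CURRENT
```

Note that if the raw date is really just a two-digit year, a TIME_FORMAT using %y (for example %m/%d/%y) would parse 9/3/20 deterministically, but as 2020 rather than 2024, so fixing the device clock remains the cleaner path.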
Hi @Meett, I did update the SSL certificate Splunk is using, which is basically the combination of the server cert + intermediate cert + root CA cert. But I still get the same errors.
I think I got the attention because it's at the top of the list. But why should I create another, duplicate question? This one describes exactly what I need, and it's still not resolved. Also, the guidelines say: "If no one else has asked your question, navigate to https://community.splunk.com and click Ask a Question, next to the search bar."
Hi @cbreitenstrom, which Splunk and Add-on version are you using? There can be multiple reasons for 500 errors in the UI.
OK for me. I just put this line into my JS: mvc.Components.get("default").unset("myToken"); Thanks a lot.
Hi @PaulPanther, after doing so as you suggested, I am trying to read the JSONResultsReader object like this:

for item in reader:
    if isinstance(item, dict):
        for key in item:
            if key == '<...>':
                A = str(item[key])
                print('A is :', A)

The above code was working until yesterday. Now it no longer enters the first for loop.
Found the solution: the host acts as an HF. Since my data is cooked once, it takes the parsing configuration of this HF; I need to create a separate HF for this kind of host.
Hello, I am currently working in a SOC, and I want to test rules in Splunk ES using the BOTSv2 dataset. How can I configure all the rules for it?
I'm attempting to call a REST API with a button click and display its JSON response on the dashboard. I'm using the following code as a reference. However, I'm not getting any results, and I can see this message in the developer tools console tab: "Error handling response: TypeError: Cannot read properties of undefined (reading 'mmrConsoleLoggingEnabled')"
Hey @kvm, it's not the Dynatrace SSL certificate as such, but from the errors you added it looks like something at the network level is causing the issue. If you have a CA cert chain configured at the Splunk level, you should add the same cert to the locations I mentioned.
Hi @Meett, regarding "Can you try to add SSL CA Chain to below location": by this, do you mean the Dynatrace SSL certificate should be updated in the Splunk Add-on? Please confirm.
Sorry, it's not complete. Can you help me with the complete query to achieve the above result? First I need to get the Final status, then display the Final status count based on the dropdown selection (Department, Location, Company). I have given the sample.csv raw data. Can you give me a single-value count query to get the final status count for servers, and then display the count based on the department, location, or company dropdown selection?