All Posts
@vksplunk1  The KV store is vulnerable to data loss, so it's best to back it up regularly.
1. Some apps store their lookups in the KV store (collections.conf).
2. Some apps store all their configuration in the KV store (ITSI, for example, though it also takes daily backups).
Splunk itself sometimes uses the KV store to track which summary-indexing time ranges have been completed.
If the KV store gets corrupted, deleted, or runs into issues during an upgrade or restart, you could lose valuable data. Keeping backups helps you recover quickly if anything goes wrong.
https://community.splunk.com/t5/Knowledge-Management/Is-there-any-way-to-retrieve-kv-store-that-was-accidentally/m-p/408788
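As a concrete sketch of the regular backup suggested above: Splunk Enterprise 7.1 and later ships a built-in CLI for this (the archive name here is just an example of mine):

```shell
# Create a point-in-time KV store archive
# (written under $SPLUNK_HOME/var/lib/splunk/kvstorebackup by default)
splunk backup kvstore -archiveName kvstore_nightly

# Restore from that archive later if the KV store is lost or corrupted
splunk restore kvstore -archiveName kvstore_nightly
```

These are ops commands against a running Splunk instance, so treat them as a fragment to adapt, not a drop-in script.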
@spy_jr  Check this community link for more details: https://community.splunk.com/t5/Alerting/Error-Code-3/m-p/689100/highlight/true
@spy_jr  This usually happens when the preceding search returns 0 results. If there is at least one result you won't see this error, so it's safe to ignore.
Get rid of that dedup host.  You will see some events with error_msg and some without.  I cannot decipher what that dedup is supposed to accomplish, or what real problem you are trying to solve, so I cannot suggest an alternative.  But with that dedup in place, if the last event for a host is NOT a failure or disconnect, you will get no error_msg for that host.  Maybe you mean this?

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" (ERROR=ERROR OR ERROR=WARN)
| eval error_msg = case(match(_raw, "Disconnected"), "disconnected", match(_raw, "restart failed"), "restart failed", match(_raw, "Failed to start connector"), "failed to start connector")
| search error_msg = *
| dedup host
| table host connName error_msg
As many here will tell you, map is probably the wrong answer to the problem you are trying to solve.  That aside, you need to clarify what "not working" and "problem ... with 'map' command" mean, and how "things" grey out. (In which window, for example, do "things" grey out?) It is atypical to use a makeresults subsearch to produce search terms inside a map command, but I think I get that the return search should be interpreted verbatim by the compiler.  My wild guess is that the search inside the map command does not return any results.  Is this the case?  It seems that the map command can be simplified to

| map maxsearches=100 search="search $time_token$ $index_token$ $rule$ | eval rule_found=\"$rule$\", rule_id=\"$id$\""

If this is semantically correct, try substituting selected values of time_token, index_token, and rule from the earlier search, and run the mapped search manually for diagnosis.  Hope this helps. (Still, there may be a more elegant and less error-prone method than the map command for the problem you are attacking.)
@shashank9  Can you check this on the heavy forwarder?

netstat -tulnp | grep 9997
OR
ss -tulnp | grep 9997

Check metrics.log to see whether any queues are getting blocked:

tail -f /opt/splunk/var/log/splunk/metrics.log | grep -i "blocked=true"

Verify that outputs.conf on the HF is correctly configured. Ensure there are no typos in the IP addresses or port numbers. File permission issues could also be a reason why the Splunk HF is not reading test.log: if the Heavy Forwarder (HF) process does not have the required permissions to read the file, it won't be able to forward logs to the indexers.
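To extend the checks above, a couple of extra commands can help confirm what the HF actually loaded and whether it can reach the indexers at all (nc is my assumption here - use whichever connectivity tool is on the host):

```shell
# Show the effective outputs.conf as Splunk merged it,
# including which file each setting came from
/opt/splunk/bin/splunk btool outputs list --debug

# Test raw TCP connectivity from the HF to an indexer on the receiving port
nc -vz <indexer_ip> 9997
```

btool output is especially useful when a typo in a stanza name means the settings you expect are never applied.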
Hi Everyone,
I've installed and configured a Splunk Heavy Forwarder (HF) on an EC2 instance in AWS, and configured two Splunk Indexers on EC2 instances in AWS. I created a test.log file on my HF with sample log events to forward to my Splunk indexers. I'm trying to forward the logs/events with the keyword "success" to indexer_1 and the logs/events with the keyword "error" to indexer_2. But for some reason the logs/events from the HF are not visible on either indexer. Just for context, I have also installed and configured a UF on another EC2 instance in AWS sending data to indexer_1, and I can see that data forwarded successfully with no issues. Below are the .conf files and setup on my HF and the two indexers.

HF:

inputs.conf:
[monitor:///opt/splunk/var/log/splunk/test.log]
disabled = false
sourcetype = test

outputs.conf:
[tcpout:errorGroup]
server = indexr_1_ip_addr:9997
[tcpout:successGroup]
server = indexer_2_ip_addr:9997

props.conf:
[test]
TRANSFORMS-routing = errorRouting,successRouting

transforms.conf:
[errorRouting]
REGEX = error
DEST_KEY = _TCP_ROUTING
FORMAT = errorGroup
[successRouting]
REGEX = success
DEST_KEY = _TCP_ROUTING
FORMAT = successGroup

Indexer_1 & Indexer_2: Configured port 9997 on both indexers.

Note: I tried the steps below to troubleshoot or identify the issue, but no luck so far:
1. Checked via CLI whether the forwarder has any inactive forwards or receivers:
Active forwards: indexr_1_ip_addr:9997, indexr_2_ip_addr:9997
Configured but inactive forwards: None
2. Checked splunkd.log on the forwarder for any errors related to data forwarding: no errors.
3. Checked the Security Group rules (inbound and outbound) in the AWS console: port 9997 is enabled for both inbound and outbound traffic.
4. All EC2 instances running Splunk are in the same Security Group in AWS.
5. Tried to ping both indexers from the HF, but got no response.

Can someone please help me with this issue? I'm stuck and unable to figure out the root cause.
Also, I'm using the same security group for both the HF and the UF with the same inbound and outbound rules, but I can only see the logs sent from the UF, not the logs/events from my HF. I'm not sure what I'm missing here to see the logs/events from the HF in my indexers. Thank you!
Hi,
I am trying to update IT Essentials Work (ITEW) from v4.13.0 to v4.15.0. There is not much documentation on ITEW, so I am using the documentation for IT Service Intelligence (ITSI). My understanding is that ITEW is the free version of ITSI without the premium features. I checked the prerequisites and updated as per the documentation:
1. Stopped the service (it is a single instance - SH)
2. Extracted the new version into $SPLUNK_HOME/etc/apps
3. Started the service
Then I opened the app on the search head to proceed with the update. It passed the pre-checks and got to

2025-02-19 14:30:56,637+1100 process:654449 thread:MainThread INFO [itsi.migration] [itsi_migration_log:43] [info] UI: Running prechecker: EAPrechecks

I left it for 30 minutes or so, then checked the status by running

curl -k -u admin:changeme -X GET https://localhost:8089/servicesNS/nobody/SA-ITOA/migration/info

and it reported is_running: false. I cannot see anything alarming when I check the status. I tried several times and every time it is the same. I checked the permissions and the troubleshooting documentation, and restarted the service - still could not update. Please advise.
Can anyone please help with this?
I just stumbled on this looking for something else, and wanted to say you can dynamically colour both the background and the major value:

"visualizations": {
    "viz_UVeH0JP5": {
        "type": "splunk.singlevalue",
        "dataSources": {
            "primary": "ds_VyZ1EWbM"
        },
        "options": {
            "majorColor": "> majorValue | matchValue(majorColorEditorConfig)",
            "backgroundColor": "> majorValue | matchValue(backgroundColorEditorConfig)"
        },
        "context": {
            "majorColorEditorConfig": [
                { "match": "NotDropped", "value": "#2f8811" },
                { "match": "Dropped", "value": "#ffffff" }
            ],
            "backgroundColorEditorConfig": [
                { "match": "NotDropped", "value": "#000000" },
                { "match": "Dropped", "value": "#2f8811" }
            ]
        }
    },

You probably can't do it through the UI, though. I rarely use it, so I'm not sure.
Has anyone been able to use the "| sendalert risk ..." command from a correlation search query, even when the search returns no results? I currently need to do this, but when there are no results I get the message "Error in 'sendalert' command: Alert script returned error code 3." Is there a way to skip (abort) the sendalert command when there are no results?
| stats max(notificationId) by iNumber
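Combined with the search from the question, a minimal sketch would be (the AS rename is just for a friendlier column name):

```spl
index=app iNumber IN (72061271737983, 72061274477906, 72061277215167)
| stats max(notificationId) AS max_notificationId BY iNumber
```

This returns one row per iNumber with its highest notificationId.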
I cannot download from the results pane either. Export is greyed out there. I am the admin, but I cannot download as CSV.
Hello, I have this search query

index=app iNumber IN (72061271737983, 72061271737983, 72061274477906, 72061277215167)
| stats count by notificationId, iNumber

This results in multiple notificationIds for each iNumber in the list. What I'm trying to find out is the max notificationId value per iNumber, and to output that list. Is there a way to do that? Something like:

iNumber | (Max)NotificationId
72061271737983 | 12345
72061271737983 | 78787

Thank you!
You can download the results in the panels to CSV by clicking the download button. Is that what you mean?
Thank you @livehybrid !!!!! I knew I was dozing off at the end of the day.... LOL
Good Morning @livehybrid
Just wanted to wrap my head around the logic:

2025-02-13 Yes
2025-02-14 Yes
2025-02-15 Yes

So does the "Yes" mean that it will alert on those dates, hence returning a result? Also, let's say for example an alert fired on the 15th and the lookup table has the date 2025-02-15. Does it mute the next day, i.e. the 16th won't get alerted (if it falls within Mon-Thursday, whereas on a Friday it will jump to Monday to mute)? So it would look like this:

2025-02-15 no

and instead of displaying that in an event it will not actually return any results? If I want to only add 1 day, would I change it like this?

| eval mute_date = if(day_of_week == Date + 86400)

all the best!
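For what it's worth, if Date is in epoch seconds (as the +86400 in the post suggests), adding one day in SPL is usually written with relative_time rather than inside if(); a sketch, reusing the field names from the post:

```spl
| eval mute_date = relative_time(Date, "+1d")
```

relative_time handles calendar arithmetic, which avoids edge cases that raw second offsets can hit around daylight-saving changes.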
Hi @mbasharat
Add "| spath input=body" to your SPL - this will extract the fields within the body JSON key as key-value fields in your results. Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped.
Regards
Will
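A self-contained way to see what this does, using makeresults with a made-up body value (the JSON content is invented for illustration):

```spl
| makeresults
| eval body="{\"user\": \"alice\", \"action\": \"login\"}"
| spath input=body
| table user action
```

Running this in a search window should show user and action extracted as regular fields from the JSON in body.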
Hi @vksplunk1
By default your KV store files will be stored in $SPLUNK_HOME/var/lib/splunk/kvstore/mongo - so if you have a backup of this directory you may be able to get the data back as of the time it was backed up. However, I would look at recovering it to a different / test server rather than your production instance, as it isn't possible to pick and choose which files to restore. Therefore you might need to recover the whole backup and then take a backup from the recovered data before restoring.
Do you have other lookups as well? Those will be affected if you overwrite from an old backup.
You could try this approach: depending on the size of your lost KV store lookup, you could export it from the restored backup, then load it back into the KV store on your production instance using a mixture of

| inputlookup <restoredData.csv> | outputlookup <OriginalLookupName>

Do you think this might work for your situation? Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped.
Regards
Will
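As a sketch of that last step, once the lookup has been exported as CSV from the restored test server (the file and lookup names below are the placeholders from the post, not real names):

```spl
| inputlookup restoredData.csv
| outputlookup append=true OriginalLookupName
```

With append=true, outputlookup merges rows into the existing collection instead of overwriting it; drop the option if you want a clean replacement.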
Hi - We have accidentally deleted a KV store collection with the outputlookup command. We do not have a backup taken from within Splunk. How can we restore the KV store from a backup of the Splunk home directory (/opt/splunk)?