All Posts


Hi @raysonjoberts  I have the same need as you. Has your problem been resolved? If so, can you share the script? Thank you.
@vikas_kone  Is this resolved or still open? If not, you can try this: Service Analyzer → Your Service → KPI → Thresholding → select Per-Entity Thresholds. You'll see a new table with a list of all your entities. Thanks, Sujit
 
I would like the first letters of my name to be capitalised
@JRW Your solution worked for me like a charm. I spent more than 6 hours troubleshooting until I stumbled on yours and decided to try it out, even though it's not marked as the preferred solution. Thank you
This is the most common issue if you don't see it and can't use it from the other options. I'm not sure/haven't checked recently what options you can set with this app. In most cases with small or mid-sized lookups this works well enough, but if you have huge ones and/or you need e.g. accelerations, then it's easier to define those via .conf files.
You should do it exactly this way. Remember sticky sessions on the LB side so that indexer acknowledgement queries are forwarded to the correct backend. Even though it's possible to add HEC and tokens to indexers and HFs, I always prefer separate HFs behind an LB. The reason is that adding and modifying tokens and other configurations quite often requires a restart of those nodes, and that is a much easier and faster operation on an HF than on indexers. The risk of duplicating or losing some events is also smaller.
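For illustration, a minimal sketch of what a HEC token stanza on such an HF could look like in inputs.conf; the port, token name, and token value below are hypothetical:

    [http]
    disabled = 0
    port = 8088

    [http://my_hec_token]
    # useACK enables indexer acknowledgement, which is why the LB needs sticky
    # sessions: the client must query ack status from the same HF that received
    # the events.
    token = 11111111-2222-3333-4444-555555555555
    index = main
    useACK = true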
Hi @danielbb  You need to create the lookup definition once you have created the KV Store collection in the Lookup Editor app. Go to Settings → Lookups → Lookup Definitions and create a new one, filling in the relevant details. Then you should be able to search it using |inputlookup. Note: I generally try to call the definition something different from the collection/KV Store name, but you do not need to.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
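If you prefer configuration files over the UI, a rough sketch of the equivalent transforms.conf stanza; the definition, collection, and field names below are hypothetical:

    # transforms.conf
    [my_kvstore_lookup]
    external_type = kvstore
    collection = my_kvstore_collection
    fields_list = _key, host, status

After that the search is simply:

    | inputlookup my_kvstore_lookup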
@isoutamo That is interesting, as I've found *more* IP changes on my customers with Victoria stacks than with Classic stacks due to indexer scaling; however, if you have stable ingestion then I guess this shouldn't change much.
I have been using verbose mode for the event details. I have not used appendpipe, though, so I will look into that. Thank you!
As usual, this depends on your environment: how many clients you have, how many apps, how many serverclasses, whether there are a lot of changes, etc. As said, Splunk recommends a dedicated server when you have more than 50 clients. In real life, if you have a small/medium-sized environment, you don't need a 12-CPU, 12 GB node. You could start with a smaller virtual node, monitor it, and increase its size when needed.
Hi
This is likely a Windows OS or disk issue unrelated to the Splunk installer itself. Installing Splunk should not cause your entire D:\ drive to become inaccessible.
To recover access to your D:\ drive:
- Open Disk Management (diskmgmt.msc) and check if the drive is visible, healthy, and has a drive letter assigned.
- If the drive appears offline or unallocated, investigate hardware or file system corruption.
- Use chkdsk to scan and repair (see the command sketch after this post).
Regarding the missing desktop shortcut, this sounds like a minor issue, likely due to a permission problem or an installer hiccup. You can manually create a shortcut to D:\Program Files\Splunk\bin\splunk.exe (or wherever you installed). Although there is a chance the installation did not actually succeed; verify this once you have restored access to the drive.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
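As a sketch of the chkdsk step mentioned above, run it from an elevated Command Prompt; D: assumes the drive letter from the original post:

    REM scan the volume, fix errors, and locate bad sectors
    chkdsk D: /f /r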
If you don't need to send those to on-prem too, then just add the SCP UF package to those hosts and all logs will be sent to SCP only. If you need them in both environments, then you must add that UF plus additional transforms or inputs.conf entries where you define which logs go to SCP, which to on-prem, and which to both. But remember that sending them to both means double license usage.
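A rough sketch of that kind of selective routing on the UF, assuming two output groups; the group names, server addresses, monitor paths, and indexes below are hypothetical:

    # outputs.conf
    [tcpout]
    defaultGroup = scp_indexers

    [tcpout:scp_indexers]
    server = inputs.example.splunkcloud.com:9997

    [tcpout:onprem_indexers]
    server = onprem-idx1.example.com:9997

    # inputs.conf
    [monitor:///var/log/app/app.log]
    index = app_logs
    # sent to both groups - counts twice against license
    _TCP_ROUTING = scp_indexers, onprem_indexers

    [monitor:///var/log/app/audit.log]
    index = app_audit
    # on-prem only
    _TCP_ROUTING = onprem_indexers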
As already said, you should use FQDN-based firewall openings for those DNS names. In real life those IPs are probably quite stable as long as you are using a Victoria stack and don't change its region. Based on this you could try IP-based firewall openings too, but from time to time that could break your traffic if/when those IPs change.
Two additions. When you are playing with data and creating your final queries, you could use Verbose mode. Then you can see all events in the Events tab even after using transforming commands. Another command you could use to calculate subtotals is appendpipe.
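A small example of appendpipe for subtotals; the index, fields, and values are only illustrative:

    index=_internal sourcetype=splunkd log_level=*
    | stats count BY component, log_level
    | appendpipe
        [ stats sum(count) AS count BY component
          | eval log_level="TOTAL" ]
    | sort component, log_level

The appendpipe subsearch runs over the results produced so far, so the per-component totals are appended without a second pass over the raw events.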
Hi Splunk Community,
I recently attempted to install Splunk Enterprise on my Windows 11 local machine using the .msi installer. During the installation:
- I checked the box to create a desktop shortcut, but after the installation completed, the shortcut did not appear.
- I also changed the default installation directory from C:\ to my D:\ drive.
After the installation, I noticed that my entire D:\ drive became inaccessible, and I'm now getting the following error:
Location is not available
D:\ is not accessible.
I'm unsure what went wrong during the installation. Not only did the shortcut not appear, but now I can't even access my D:\ drive. Has anyone else experienced this issue? Could this be due to a permission error, drive formatting, or something caused by the installer? Any guidance on how I can fix or recover my D:\ drive and properly install Splunk would be greatly appreciated. Thanks in advance!
Simple and very helpful. Thank you.
Thank you! This is great material, especially for a Splunk beginner. I will digest this for a bit.
The data sent by httpout is _not_ your normal HEC. True, it uses the same port and the same tokens, but the transmission method is different. It's actually more of an S2S protocol embedded in HTTP requests. Therefore I wouldn't be very optimistic about "downgrading" HTTP version/features on the fly.
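For reference, a forwarder's httpout stanza in outputs.conf looks roughly like the sketch below; the token and URL are placeholders. Even though it targets a HEC port and token, the payload is the S2S-style stream described above rather than plain HEC JSON events:

    [httpout]
    httpEventCollectorToken = 11111111-2222-3333-4444-555555555555
    uri = https://hec.example.com:8088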
Could you elaborate on the dashboard you are using? Is it a custom dashboard that sends HTTP requests to SOAR to create new containers and artifacts, or are you using the Event Forwarding settings of the Splunk App for SOAR Export?
If you are using the Event Forwarding settings, then check which field has the checkbox to group, as this will cause results with the same grouping field to be added to the same container in SOAR.