All Posts

@JRW Your solution worked for me like a charm. I spent more than 6 hours troubleshooting until I stumbled on yours and decided to try it out, even though it's not marked as the preferred solution. Thank you!
This is the most common issue if you don't see it and can't use it from the other options. I'm not sure / haven't checked lately which options you can set with this app. In most cases with small or mid-sized lookups this works well enough, but if you have huge ones and/or you need e.g. accelerations, then it's easier to define those via conf files.
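For reference, a minimal sketch of what defining a collection via conf files might look like, assuming a hypothetical collection named big_collection with host and status fields (collections.conf in your app's local directory):

# Hypothetical collection name and fields - adjust to your data
[big_collection]
field.host = string
field.status = string
# Accelerations are the kind of setting that is typically easier to manage here
accelerated_fields.by_host = {"host": 1}

You would still pair this with a lookup definition in transforms.conf to search it with | inputlookup.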
You should do it exactly this way. Remember sticky sessions on the LB side to forward indexer acknowledgment queries to the correct backend. Even though it's possible to add HEC and tokens to indexers and HFs, I always prefer to use separate HFs behind an LB. The reason is that adding and modifying tokens and other configurations quite often requires a restart of those nodes, and that is a much easier and faster operation on an HF than on indexers. Also, the risk of duplicating or losing some events is smaller.
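For context, indexer acknowledgment on the HEC token is what makes the sticky sessions necessary. A minimal sketch of such a token in inputs.conf on the HF, with a hypothetical stanza name and a placeholder token value:

# Hypothetical token name; the token value is generated when you create it
[http://my_hec_input]
token = <generated-token-guid>
useACK = true
disabled = 0

With useACK = true the client polls the ack endpoint after sending a batch, and the LB has to route that poll back to the same HF that accepted the original data.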
Hi @danielbb  You need to create the lookup definition once you have created the KV Store collection in the Lookup Editor app. Go to Settings -> Lookups -> Lookup Definitions and create a new one, filling in the relevant details. Then you should be able to search it using | inputlookup. Note: I generally try to call the definition something different from the collection/KV Store name, but you do not need to.  Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
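If you prefer conf files, a minimal sketch of the equivalent lookup definition in transforms.conf, assuming a hypothetical collection named my_collection with host and status fields:

# Hypothetical definition and collection names
[my_collection_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, host, status

After that, | inputlookup my_collection_lookup should return the collection's contents.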
@isoutamo That is interesting, as I've found *more* IP changes for my customers with Victoria stacks than Classic stacks due to indexer scaling; however, if you have stable ingestion then I guess this shouldn't change much.
I have been using verbose mode for the event details. I have not used appendpipe, though, so I will look into that. Thank you!
As usual, this depends on your environment: how many clients you have, how many apps, how many serverclasses, whether there are lots of changes, and so on. As said, Splunk recommends a dedicated server when you have more than 50 clients. In real life, if you have a small/medium-sized environment, you don't need a 12-CPU, 12-GB node. You could start with a smaller virtual node, monitor it, and increase its size when needed.
Hi

This is likely a Windows OS or disk issue unrelated to the Splunk installer itself. Installing Splunk should not cause your entire D:\ drive to become inaccessible.

To recover access to your D:\ drive:
Open Disk Management (diskmgmt.msc) and check whether the drive is visible, healthy, and has a drive letter assigned.
If the drive appears offline or unallocated, investigate hardware or file system corruption.
Use chkdsk to scan and repair (a sketch follows below).

Regarding the missing desktop shortcut: this sounds like a minor issue, likely due to a permissions problem or an installer hiccup. You can manually create a shortcut to D:\Program Files\Splunk\bin\splunk.exe (or wherever you installed it). There is also a chance the installation did not actually succeed; verify this once you have restored access to the drive.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
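Following up on the chkdsk step above, a minimal sketch, run from an elevated Command Prompt and assuming the drive letter is still D::

rem /f fixes file system errors it finds; add /r to also scan for bad sectors
chkdsk D: /f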
If you don't need to send those to on-prem too, then just add the SCP UF package to those hosts and all logs will be sent to SCP only. If you need those in both environments, then you must add that UF plus additional transforms or an inputs.conf where you define which logs go to SCP, which go to on-prem, and which go to both. But remember that sending them to both means double license usage.
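A minimal sketch of what that per-input routing could look like, with hypothetical group and server names (the SCP output group would normally come from the Splunk Cloud UF credentials package):

outputs.conf:
# Hypothetical output groups
[tcpout:onprem_indexers]
server = onprem-idx1.example.com:9997

[tcpout:scp_indexers]
server = inputs1.example.splunkcloud.com:9997

inputs.conf:
# This input goes to both environments (double license usage)
[monitor:///var/log/app_both.log]
_TCP_ROUTING = onprem_indexers,scp_indexers

# This input goes to SCP only
[monitor:///var/log/app_scp_only.log]
_TCP_ROUTING = scp_indexers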
As already said, you should use FQDN-based firewall openings for those DNS names. In real life those IPs are probably quite stable as long as you are using a Victoria stack and don't change its region. Based on this you could try IP-based firewall openings too, but from time to time that could break your traffic if/when those IPs change.
Two additions. When you are playing with data and creating your final queries, you can use verbose mode; then you can see all events in the Events tab even after using transforming commands. Another command you could use to calculate subtotals is appendpipe.
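A minimal sketch of appendpipe producing a subtotal row, using the _internal index as a stand-in for your own data:

index=_internal | stats count by sourcetype
| appendpipe [ stats sum(count) as count | eval sourcetype="TOTAL" ]

The subsearch runs against the results so far and appends its output, so you get a count per sourcetype plus a total row at the end.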
Hi Splunk Community,

I recently attempted to install Splunk Enterprise on my Windows 11 local machine using the .msi installer. During the installation:
I checked the box to create a desktop shortcut, but after the installation completed, the shortcut did not appear.
I also changed the default installation directory from C:\ to my D:\ drive.

After the installation, I noticed that my entire D:\ drive became inaccessible, and I'm now getting the following error:
Location is not available
D:\ is not accessible.

I'm unsure what went wrong during the installation. Not only did the shortcut not appear, but now I can't even access my D:\ drive. Has anyone else experienced this issue? Could this be due to a permission error, drive formatting, or something caused by the installer? Any guidance on how I can fix or recover my D:\ drive and properly install Splunk would be greatly appreciated. Thanks in advance!
Simple and very helpful. Thank you.
Thank you! This is great material, especially for a Splunk beginner. I will digest this for a bit.
The data sent by httpout is _not_ your normal HEC. True, it uses the same port and the same tokens, but the transmission method is different. It's actually more of an S2S protocol embedded in HTTP requests. Therefore I wouldn't be very optimistic about "downgrading" the HTTP version/features on the fly.
Could you elaborate on the dashboard you are using? Is it a custom dashboard that sends HTTP requests to SOAR to create new containers and artifacts, or are you using the Event Forwarding settings of the Splunk App for SOAR Export?

If you are using the Event Forwarding settings, then check which field has the grouping checkbox enabled, as this will cause results with the same grouping field to be added to the same container in SOAR.
Typically that's a result of wrong scope or insufficient access: your lookup is either private or exported only to the app you created it in, but you're searching from another app (typically the Search app).
@ITWhisperer and @livehybrid, both responses helped me understand the overall issue and I thank you both.

Another method I worked on is to use two regex expressions in props.conf:

Regex 1: FAILED.+\:\s(?<LogFile>.+)(\n)(?<Reason1>.+(\n).+)
That grabs "Host key verification failed lost connection" OR "You are attempting to access a system owned by XYZ" into the Reason1 field.

Regex 2: Agreement\sfor\sdetails\.(\n)(?<Reason2>.+)
That grabs "scp: /logs/rsyslog/server02/: Not a directory" into the Reason2 field.

In the search there is a case statement to make it work:
| eval Message=case(like(Reason1,"%You are%"),Reason2,1==1,Reason1)

It sounds a bit inefficient, but it is working for the report. Thank you both again.
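For completeness, a sketch of how those two extractions might be declared in props.conf, assuming a hypothetical sourcetype named scp_transfer_log (the EXTRACT- class names are arbitrary):

# Hypothetical sourcetype name
[scp_transfer_log]
EXTRACT-reason1 = FAILED.+\:\s(?<LogFile>.+)(\n)(?<Reason1>.+(\n).+)
EXTRACT-reason2 = Agreement\sfor\sdetails\.(\n)(?<Reason2>.+)

These are search-time extractions, so the (\n) groups only match if the raw event actually contains those line breaks, i.e. the events are indexed as multi-line.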
Splunk is not Excel But seriously. For Splunk every result row is... well, a separate row. Depending on the actual use case you could cheat a bit but the way to do so would depend on the detaile... See more...
Splunk is not Excel

But seriously: for Splunk, every result row is... well, a separate row. Depending on the actual use case you could cheat a bit, but the way to do so would depend on the detailed desired outcome.

You could do something like

<your_initial_search>
| stats values(_raw) as "Event Details" by UID

(and maybe do some magic with custom CSS in a dashboard to "un-align" the table a bit). But that will give you just raw events. If you want to have separate fields from those "content" events... that's going to get tricky and ugly (and un-splunky, because the result will not have any internal logical consistency and will be only for presentation purposes).

An example using my Windows events index:

index=winevents
This is just the base search - nothing to write home about.

| sort EventID
That should also be pretty obvious - we want the events grouped by the EventID field. You can add subsequent sort field(s) if you want them sorted within those groups.

| streamstats window=1 current=f last(EventID) as previousID
Now the magic starts. We're copying the EventID value from the previous event to the current one. The carried-over value is called previousID.

| eval splittable=if(NOT EventID=previousID,mvappend("1","0"),0)
If the current EventID is the same as the previous one (which we carried over in the last step), this is not the first result with the given EventID. If those values are different (or, in the case of the very first result row, previousID is empty; that's why the condition is in the form of NOT a=b instead of a!=b), this is the first row of results for the given EventID. Depending on which case it is, we create a temporary field with either a single value (whether it's a zero or anything else is not important; I just chose zero) or two values, of which the second one must be the same as for the "not-first" rows. We're doing this because Splunk cannot just arbitrarily add rows, so we're doing the trick with multiple values in one result (a so-called multivalue field) so we can split that result into two separate ones. And this we do by calling:

| mvexpand splittable
Now the first row for each unique EventID, which we marked with two values in the field called "splittable", gets split into two separate rows with one value each. A row which had just one value is left unchanged. What is also important is that the order of the split results remains the same as the order of the values in the field on which we're calling mvexpand. So now all that's left is to find the "header" row, clean all "non-header" values, and clean the "header" field (in our case the EventID field) for all "non-header" rows:

| foreach *
    [ eval <<FIELD>>=case(splittable=1 AND "<<FIELD>>"="EventID",EventID,splittable=1,"",splittable=0 AND "<<FIELD>>"="EventID","",1=1,<<FIELD>>) ]

We may now remove the temporary fields which we don't need anymore (this step is optional if we're limiting displayed fields to a strictly defined set; if we just list all fields, we might want to do this so we don't drag temporary fields along):

| fields - splittable previousID

And now we can present the results as a table with either

| table EventID host _time field1 field2

and so on, or simply

| table EventID *

OK. So this exercise was fun, but I wouldn't do it this way. After doing all this you're getting a set of results where you have no relationship between the EventID field in one result and the actual "contents" of the events in other results - you can't aggregate the data, (re)sort them or do anything else, maybe except some general statistics.
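Assembled into one pipeline, the steps above look like this (using the same winevents index from the walkthrough):

index=winevents
| sort EventID
| streamstats window=1 current=f last(EventID) as previousID
| eval splittable=if(NOT EventID=previousID,mvappend("1","0"),0)
| mvexpand splittable
| foreach * [ eval <<FIELD>>=case(splittable=1 AND "<<FIELD>>"="EventID",EventID,splittable=1,"",splittable=0 AND "<<FIELD>>"="EventID","",1=1,<<FIELD>>) ]
| fields - splittable previousID
| table EventID *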
This kind of result is unusable. As I said at the beginning - Splunk is not Excel and you can't "merge fields". The only way this could work would be if someone wrote a custom visualization which would do some JS magic comparing values from neighboring rows and fiddling with CSS, but so far I don't think anyone has done such a thing.
| eventstats count as total by uid | where total > 4