
All Posts

Hello, I would like to use a subsearch to literally paste a command into the SPL, e.g.:

| makeresults [| makeresults | eval test="|eval t1 = \"hello\"" | return $test]

and for it to be equivalent to:

| makeresults | eval t1 = "hello"

Is this possible?
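For what it's worth, a subsearch's results are substituted into the outer search as quoted strings before the search runs, so the pipe inside the returned value stays literal text instead of becoming a new command. The usual way to splice an SPL fragment in place is a search macro; a minimal sketch, with inject_t1 as a hypothetical macro name.

In macros.conf:

[inject_t1]
definition = eval t1 = "hello"

Then:

| makeresults | `inject_t1`

expands to | makeresults | eval t1 = "hello" at parse time.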
Hello, I've made a dashboard with Dashboard Studio and uploaded some images. The issue I'm facing is that these images are not visible to other users with other roles. They have permission on the dashboard as well and can access it; the only issue is with the images. How can I fix this?
I already tried the default sequence ([\r\n]+), as I wrote in the original post. After your suggestion, I checked it one more time, but I still see multiline events.
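In case it helps narrow it down: even with a correct LINE_BREAKER, events can be merged back together when line merging is enabled. A minimal props.conf sketch, assuming a hypothetical sourcetype name, applied on the first full (parsing) instance the data passes through:

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

A restart is needed afterwards, and already-indexed events keep their old breaking.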
@Gunnar Did you find any alternate solution? I have a similar problem and am looking for a solution.
Hi, is there a way to turn an input playbook into an app? I have a playbook that gets an input and does something. I am looking for a way to make it an app so there will be no need to activate another playbook in order to make it work. Also, it is a bit problematic to run a former playbook to activate the input playbook, because then I would have to edit the former playbook with the relevant input, while with an app it would be much simpler. Thank you in advance.
Did you find a fix for that?
Hi @ejwade, you should already have these extractions, because Splunk usually identifies fieldname=fieldvalue pairs automatically. Anyway, please try this regex:

name\=\"(?<name>[^\"]*)\",value\=\[*\"(?<values>[^\"]*)

which you can test at https://regex101.com/r/PEszES/1
Ciao. Giuseppe
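If it helps, here is how it might be applied in SPL, using the capture groups from the regex above and rebuilding the "\1: \2"-style output from the original question with eval:

| rex field=_raw "name=\"(?<name>[^\"]*)\",value=\[*\"(?<values>[^\"]*)"
| eval pair=name . ": " . values
| table name values pair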
Hi, how can we apply colors to the respective fields in this dashboard? Source code:

<title>Top Web Category blocked</title>
<search>
  <query>index=es_web action=blocked host=* sourcetype=* | stats count by category | sort 5 -count</query>
  <earliest>$time_range_token.earliest$</earliest>
  <latest>$time_range_token.latest$</latest>
</search>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.chart">bar</option>
<option name="charting.backgroundColor">#00FFFF</option>
<option name="charting.fontColor">#000000</option>
<option name="charting.foregroundColor">#000000</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.drilldown">none</option>
<option name="charting.fieldColors">{"online-storage-and-backup":0x333333,"unknown":0xd93f3c,"streaming-media":0xf58f39,"internet-communications-and-telephony":0xf7bc38,"insufficient-content":0xeeeeee}</option>
<option name="charting.legend.placement">right</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
</row>
</form>

Output: I need different colors for all the fields. How can we achieve this? Thanks.
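One option, if the goal is an explicit color per category, is to extend the charting.fieldColors map already in the source so every category value has its own entry; the keys must match the values of the category field exactly, and the hex colors below are just placeholders. Note that fieldColors keys on field/series names, so how it maps onto individual bars depends on how the chart splits the data:

<option name="charting.fieldColors">{"online-storage-and-backup":0x1F77B4,"unknown":0xD62728,"streaming-media":0xF58F39,"internet-communications-and-telephony":0xF7BC38,"insufficient-content":0x2CA02C}</option>

Any category not listed falls back to the default palette.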
Hi @VK18, this is a precise requirement from Splunk, related to the fact that the HF could become overloaded managing more than 50 clients on top of its normal job as an HF. You may see that CPUs and RAM aren't overloaded on this HF, but they could be, with a relevant impact on your log ingestion, because the DS and HF both make heavy use of the network interface, and managing 100 clients is heavy for that server's network interface. In addition, you spoke of 100 clients, not a few more than 50, so I'd avoid using both roles on the same machine. If this architecture is mandatory for you, give more resources (CPUs and RAM) to that server and analyze the network activity, because this could be the bottleneck. As a last consideration, if you have problems on that server, this will be the first annotation from Splunk Support. Ciao. Giuseppe
I need your support in finding a way to integrate web apps hosted in the Azure cloud with Splunk. I tried many add-ons from Splunkbase but did not find this option, so if anyone knows how to integrate them to get the logs, please let me know. Thank you all.
Well, the time comes in OK, so it obviously found the correct timestamp. Without the configuration I get some of the fields in the JSON but not the timestamp. With the configuration I only get the timestamp. Of course, if I move the timestamp to the beginning, then I get the correct mappings... but I don't want to do that.
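In case it is useful: for JSON events, structured extraction and timestamp recognition can be combined so the timestamp's position in the event stops mattering. A minimal props.conf sketch, with the sourcetype name, timestamp field name, and TIME_FORMAT as placeholders to adapt:

[my_json_sourcetype]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
# avoid double extraction at search time when using INDEXED_EXTRACTIONS
KV_MODE = none

Note KV_MODE is a search-time setting, so it belongs in props.conf on the search head.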
Hi All, We have approximately 100 Splunk Universal Forwarders (UFs) installed at a remote site, and we're interested in setting up a Heavy Forwarder (HF) at that location to forward the data from the UFs to the indexers. Additionally, we plan to deploy the deployment server on the same virtual machine (VM). Based on the documentation, it appears that a deployment server can be co-located with another Splunk Enterprise instance as long as the deployment client count remains at or below 50. We would like to better understand the rationale behind this limitation of 50 clients, and why it is not possible to manage more than 50 clients when co-located with another Splunk Enterprise component. Regards, VK
I'm looking for the regular expression wizards out there. I need to do a rex with two capture groups: one for name, and one for value. I plan to use the replace function, and throw everything else away but those two capture groups (e.g., "\1: \2"). Here are some sample events.

name="Building",value="Southwest",descendants_action="success",operation="OVERRIDE"
name="Building",value=["Northeast","Northwest"],descendants_action="failure",operation="OVERRIDE"
name="Building",value="Southeast",descendants_action="success",operation="OVERRIDE"
name="Building",value="Northwest"
name="Building",value="Northwest",operation="OVERRIDE"

So far I just have this.

^name=\"(.*)\",value=\[?(.*)\]?

Any ideas?
Finally, it works! Thank you very much.
Hello, I also want to encrypt personal data in the collected data at index time and decrypt it at search time, just like human96. It seems that implementing decryption at search time can be done with a custom command, but I'm currently researching and contemplating how to encrypt a specific field at index time. Have you implemented a method to encrypt a specific field at index time? Your insights would be greatly appreciated.
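For what it's worth, I have not seen a built-in reversible encryption function at index time; a commonly used compromise (masking, which is not true encryption and is irreversible) uses an INGEST_EVAL transform. A rough sketch, assuming the value appears in _raw as a hypothetical ssn=<digits> pair.

In props.conf:

[your_sourcetype]
TRANSFORMS-mask_pii = mask_ssn_at_ingest

In transforms.conf:

[mask_ssn_at_ingest]
INGEST_EVAL = _raw=replace(_raw, "ssn=\d{9}", "ssn=REDACTED")

True reversible encryption would likely need the events to be encrypted before they reach Splunk, with a custom search command (as you mention) handling decryption at search time.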
Hi, I am having an issue with my data ingestion. I have an XML log file that I am ingesting that is 1GB in size but is taking up to 18GB of my ingestion license. How do I correct this? I did a test converting the data to JSON, but I am still seeing mismatches between the log file size and the data being ingested into Splunk. I am comparing daily ingestion vs. daily log size.
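One way to see where the extra volume comes from (for example, the same file being re-read after rotation or after being rewritten) is to break license usage down by source in the internal logs; a sketch, assuming you can search _internal:

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) AS bytes by s, st, h
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB

If one source shows many multiples of the file's size, the file is probably being re-ingested rather than expanded.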
Perfect!  exactly what I was after.   Many thanks.
Tell the stats command you want the values of field4.

| fields job_no, field2, field4
| dedup job_no, field2
| stats count, dc(field4) AS dc_field4, values(field4) AS field4 by job_no
| eval calc=dc_field4 * count
What I am trying to do is graph / timechart active users. I am starting with this query:

index=anIndex sourcetype=perflogs
| rex field=_raw "^(?:[^,\n]*,){2}(?P<LoginUserID>\w+\.\w+)"
| timechart distinct_count(LoginUserID) partial=false

This works, and the resulting graph appears to be correct for 120 mins, resulting in 5-min time buckets. Then if I shorten the time period down to 60 mins, resulting in 1-min buckets, I have a question. In the 120-min graph with 5-min buckets at 6:40-6:45 I have 318 distinct users, but in the shorter graph with 1-min buckets each 1-min bucket has 136, 144, 142, 131, 117 distinct users. I understand that a user can be active one minute, inactive the next minute or two, and then active again on the 4th/5th minute, which is what is happening.

My question is how to make each 1-min bucket also count users that were active in the previous five 1-min buckets, resulting in a number that represents users that are logged in, not just currently active. I believe I can add minspan=5min as a kludge, but I am wondering if there is a way to do this at the 1-min span. I believe what I need to do is run two queries: the first one as above, then use an append that queries for events from -5min to -10min. But from what I have been trying, it either is not working or I am not doing it correctly. Basically I am trying to find those user IDs that are active in the first time bucket (1 min) that were also active in the previous time bucket(s), then do a distinct_count() on the user IDs collected from both queries.
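A sliding five-minute distinct count can be computed in a single query with streamstats and a time window, rather than with append; a sketch reusing the extraction above (sort 0 _time puts events in ascending time order for the window):

index=anIndex sourcetype=perflogs
| rex field=_raw "^(?:[^,\n]*,){2}(?P<LoginUserID>\w+\.\w+)"
| sort 0 _time
| streamstats time_window=5m dc(LoginUserID) AS active_users
| timechart span=1m max(active_users) AS active_users partial=false

Each event then carries the distinct-user count over the trailing five minutes, and the timechart keeps the 1-min granularity. Note that sort 0 loads all events, so this is best kept to bounded time ranges.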
We have a Splunk v9.1.1 cluster with a three-member search head cluster (SHC) running on EC2 instances in AWS. In implementing disaster recovery (DR) for the SHC, I configured AWS Auto Scaling to replace the search heads on failure. Unfortunately, Auto Scaling does NOT re-use the IP of the failed instance on the new instance, probably due to other use cases of up- and down-scaling, so replacement instances will always have different IPs than the failed instance.

Starting with a healthy cluster with an elected search head captain and RAFT running, I terminated one search head. During the minute or two that it took AWS Auto Scaling to replace the search head instance, RAFT stopped and there was no captain. I was then unable to add a NEW third search head to the cluster.

OK, so then I created a similar scenario, but this time had Auto Scaling issue the commands to force one of the remaining two search heads to be an UN-ELECTED static captain, and then confirmed this had worked; I had two search heads, one being a captain. The Splunk documentation mentions using a static captain for DR. However, when I again tried to add the new instance as the third search head, I again received the error that RAFT was not running, there was no cluster, and therefore the member could not be added!

So what is Splunk's recommendation for disaster recovery in this situation? I understand this is a chicken-and-egg scenario, but how are you expected to recover if you can't get a third search head in place in order TO recover? It seems counter-intuitive that Splunk would disallow adding a third search head, especially with the static search head captain in place.

There are some configurable timeout parameters in server.conf in the [shclustering] stanza. Would increasing any of these values keep the SHC in place long enough for Auto Scaling to replace that third search head instance so that it can then join the SHC? If so, which timeouts should I use, and which values would be appropriate so they wouldn't interfere with day-to-day usage?

I'm stuck on this and haven't been able to progress any further. Any and all help is greatly appreciated!
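For reference, the documented static-captain sequence looks roughly like the following; hostnames and the management port are placeholders, and note that the docs treat static captain as a temporary recovery state, so adding a member may require reverting to dynamic mode (-election true) once enough members exist.

On the member that should become the static captain:
splunk edit shcluster-config -mode captain -captain_uri https://sh1.example.com:8089 -election false

On each other surviving member:
splunk edit shcluster-config -mode member -captain_uri https://sh1.example.com:8089 -election false

On the new instance, initialize SHC membership, then add it from the captain:
splunk init shcluster-config -mgmt_uri https://sh3.example.com:8089 -replication_port 9887 -secret <shared_secret>
splunk add shcluster-member -new_member_uri https://sh3.example.com:8089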