All Posts


Hi, I'm looking to generate a one-month report of all alerts triggered from a Splunk app. My "FSS" app has around 60 alerts configured. I want a report of which alerts Splunk triggered in the last month, with date and time.

Thanks,
Abhineet Kumar
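A possible starting point is the scheduler log in the _internal index, which records every saved-search/alert run. A sketch (assuming the alerts live in the "FSS" app; field availability can vary by Splunk version):

```
index=_internal sourcetype=scheduler app="FSS" status=success alert_actions=* earliest=-30d@d
| eval triggered_at=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table triggered_at savedsearch_name alert_actions
| sort - triggered_at
```

Alternatively, the triggered-alerts REST endpoint or index=_audit can be queried, depending on your retention of _internal.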
Hi @Marta88,
as you can read at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf?utm_source=answers&utm_medium=in-comment&utm_term=limits.conf&utm_campaign=refdoc, the differences are:

use_stats_v2 = [fixed-width | <boolean>]
* Specifies whether to use the v2 stats processor.
* When set to 'fixed-width', the Splunk software uses the v2 stats processor for operations that do not require the allocation of extra memory for new events that match certain combinations of group-by keys in memory. Operations that cause the Splunk software to use v1 stats processing include the 'eventstats' and 'streamstats' commands, usage of wildcards, and stats functions such as list(), values(), and dc().
* NOTE: Do not change this setting unless instructed to do so by Splunk Support.
* Default: true

and

stats = <boolean>
* This setting determines whether the stats processor uses the required field optimization methods of Stats V2, or if it falls back to the older, less optimized version of required field optimization that was used prior to Stats v2.
* This setting only applies when 'use_stats_v2' is set to 'true' or 'fixed-width' in 'limits.conf'.
* When Stats v2 is enabled and this setting is set to 'true', the stats processor uses the Stats v2 version of required field optimization.
* When Stats v2 is enabled and this setting is set to 'false', the stats processor falls back to the older version of required field optimization.
* Do not change this setting unless instructed to do so by Splunk support.
* Default: false

In short, the v2 processor avoids allocating extra memory for new group-by key combinations; v1 processing is still used for commands such as eventstats and streamstats.

Ciao.
Giuseppe
Hi, I'm trying to figure out the most recommended way to set up an index that stores data ingested in the following manner:
1) Every ~30 days a baseline of events is sent, specifying the current "truth".
2) Between baselines, small updates are ingested, specifying diffs from the previous baseline.
A baseline would be around ~1 GB, and the small updates would be ~1 MB every few days. Queries on this index will build a "current state" by querying the baseline plus the updates since. This would require a baseline plus its updates to be kept in warm buckets.

I was wondering what would be the best indexes.conf configuration for this case? My initial thought was:

frozenTimePeriodInSecs = 7776000  # 90 days, to keep ~3 baselines
maxDataSize = 2000                # max size of a baseline
maxWarmDBCount = 30

The reason I set maxWarmDBCount to 30 was in case of an update every day, with automatic rolling from hot to warm buckets. If hot buckets can stay hot for multiple days, I could reduce this number. Any input? Thanks!
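For reference, those settings assembled into a full stanza might look like the sketch below (the index name state_index and the paths are placeholders; the setting names themselves are standard indexes.conf settings):

```
[state_index]
homePath   = $SPLUNK_DB/state_index/db
coldPath   = $SPLUNK_DB/state_index/colddb
thawedPath = $SPLUNK_DB/state_index/thaweddb
# Keep ~90 days, i.e. roughly 3 monthly baselines
frozenTimePeriodInSecs = 7776000
# Max bucket size in MB; large enough to hold one ~1 GB baseline
maxDataSize = 2000
# Allow up to 30 warm buckets so a baseline plus daily updates stay warm
maxWarmDBCount = 30
```

Note that maxDataSize is in MB, and hot buckets can also be rolled on a schedule with maxHotSpanSecs if you need tighter control over when they become warm.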
With the latest version of Splunk, I heard that there will be some changes, as reported at this link, but it is not specified what the changes are. https://docs.splunk.com/Documentation/Splunk/9.1.1/ReleaseNotes/Deprecatedfeatures

Thanks,
Marta
Hi @Marta88, what do you mean by "stats command version 1 and version 2"? The stats command is more or less always the same. Ciao. Giuseppe
Help me understand what these 2 lines are for. I have other fields besides Values and sourcetype, and I need to apply this expansion to the 2nd column (column name = Values).

| eval C1=mvmap(C1, C1."_R".row)
| foreach 2 3 4 [ eval C<<FIELD>>=random() % 10000 ]
Here is something similar to what I have tried. Please let me know where I might be making a mistake.

<form version="1.1" theme="dark">
  <label>test</label>
  <init>
    <set token="tok_row">0</set>
  </init>
  <search id="base_data">
    <query>index="_internal" earliest=-15m@m | stats values(source) as Values by sourcetype | eval column_expansion=Values</query>
  </search>
  <row>
    <panel>
      <table>
        <search base="base_data">
          <query>| eval Values=if(row=$tok_row$, column_expansion, mvindex(column_expansion, 0, 0))</query>
        </search>
        <fields>"Values","sourcetype"</fields>
        <drilldown>
          <eval token="tok_row">if($row.row$=$tok_row$, 0, $row.row$)</eval>
        </drilldown>
      </table>
    </panel>
  </row>
</form>
You are overwriting Customer, so if your lookup match is not found, it will overwrite Customer. Do it like this:

| lookup customer_lookup customer_name as Customer output standard_customer_name
| eval Customer=coalesce(standard_customer_name, Customer)

So, if your Customer does not exist in the lookup, the lookup will return a null standard_customer_name, and the coalesce will just use the original Customer.
"Search is waiting for input..." is a token problem; please post your XML search and drilldown segment.
Hi, I would like to know the difference between version 1 and version 2 of the stats command. Thank you. Kind regards, Marta
Hi, I've tried this and it does not work. I need to block all data being written to our indexers from a set of IPs (network security devices that probe our servers for compromises, including the Splunk HFs, UFs, etc.), so I do want to drop this at the index level. I've placed this code in the props.conf and transforms.conf files under etc/system/local: is that correct? It doesn't seem to drop the data, either by IP address or by hostname. Thanks.
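For comparison, a typical index-time drop looks like the sketch below (the host pattern and stanza names are placeholders; note these files must live on the first full Splunk instance that parses the data, i.e. the indexers or heavy forwarders, not on universal forwarders):

```
# props.conf
[host::10.1.2.*]
TRANSFORMS-dropscanners = drop_security_scanners

# transforms.conf
[drop_security_scanners]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

If the events arrive via a heavy forwarder, they are already parsed ("cooked") when they reach the indexers, and index-level props/transforms will not be applied again, which is a common reason this appears not to work.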
I have a Splunk Enterprise installation and a Splunk Cloud stack. I want to migrate logging from Enterprise to Splunk Cloud.

My EC2 machines have an old Splunk forwarder installed and are forwarding to the Splunk Enterprise instance. The log file I'm ingesting is JSON format, but each line contains a SYSLOG prefix. This prefix seems to be stripped out by Splunk Enterprise, from what I can tell. The sourcetype of the log is a custom type which is NOT explicitly defined on the Splunk Enterprise server. Since the log is JSON, no explicit field extraction is needed; the log events are just JSON messages and are properly extracted.

Now I've changed the outputs.conf on the EC2 machine to send the logs to Splunk Cloud. Nothing else changed. Splunk Cloud indexes the events, but the SYSLOG header shows up in Splunk Cloud. That's why the events don't seem to be recognized as JSON and field extraction is not working.

Any idea how to tell Splunk Cloud to strip the SYSLOG header from these events? And especially: why was this apparently working automatically on the Splunk Enterprise side? Both Splunk installations have the Splunk Add-on for Unix installed, which seems to contain configuration for stripping SYSLOG headers from events, but I don't yet understand how that comes into action.

My inputs.conf:

[monitor:///var/log/slbs/tors_access.log]
disabled = false
blacklist = \.(gz|bz2|z|zip)$
sourcetype = tors_access
index = torsindex

There is no props.conf or transforms.conf on the EC2 machine with the Splunk forwarder for this app (and if there were, it should have kicked in when I changed the output to Splunk Cloud).
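One common way to handle this in Splunk Cloud is a props.conf rule, deployed in a private app, that deletes everything before the first opening brace at index time. A sketch, assuming the sourcetype from the inputs.conf above and that the JSON payload starts at the first "{":

```
# props.conf (in an app deployed to Splunk Cloud)
[tors_access]
# Strip the syslog prefix up to the start of the JSON object
SEDCMD-strip_syslog = s/^[^{]+//
INDEXED_EXTRACTIONS = json
```

SEDCMD and INDEXED_EXTRACTIONS are standard props.conf settings; whether INDEXED_EXTRACTIONS is appropriate here depends on whether the forwarder or the indexer should do the extraction, so treat this as a starting point rather than a definitive config.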
Can anyone help please?
@bowesmana This is exactly what I was looking for, and it's excellent. Thank you for your response! I tried your query and it works great, but when I apply the same thing to my query it does not work. It still shows the multiple values, and when I click on the row it displays a "Search is waiting for input..." message. The results I am displaying are values() through stats; please let me know if that could be the reason it's not working, or if it is something else.
I am also facing a similar problem while submitting my Splunk add-on app to Splunk. In my case, I am making a POST request to my software using the code below:

response = requests.post(url=url, headers=headers, json=temp_container, verify=False, timeout=60)

After the review and feedback from the Splunk team, I added a field to my HTML that lets users enter the path to their SSL certificate (optional field). I then changed my Python script so that if the user has entered a path, the code below is executed; otherwise the one above is:

response = requests.post(url=url, headers=headers, json=temp_container, timeout=60, verify=certloc)

certloc is the path to the certificate. However, I am getting the same response from the review team about the code where I have kept verify=False.

If I remove that code from the Python script, will it make it mandatory for users to enter the path to the SSL certificate? In that case, do users have to use their own certificate and place it inside the default folder of the package, or do we generate the certificate, place it inside the default folder, and then package it before distributing? Can the same certificate be used by all app users when we distribute the package? In our case, every customer has their own instance of our product, just like every user has their own Splunk instance.
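One way to avoid shipping verify=False at all is to fall back to the default trust store when no path is given, rather than disabling verification. A minimal sketch (resolve_verify and the variable names are hypothetical, not part of any Splunk API):

```python
def resolve_verify(cert_path=None):
    """Return the value to pass as the `verify` argument of requests.post().

    If the user supplied a CA-bundle path, verify the server certificate
    against it; otherwise fall back to the default trust store (True)
    instead of disabling verification with False.
    """
    return cert_path if cert_path else True


# Usage sketch, mirroring the call in the post above:
# response = requests.post(url=url, headers=headers, json=temp_container,
#                          timeout=60, verify=resolve_verify(certloc))
```

With this shape, the optional field stays optional, and verify is never False, which is typically what app vetting is checking for.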
Hi @MScottFoley, just to complete the solution from @PickleRick, which is perfect, you have to:

- go to [Settings > Lookups > Lookup definitions]
- choose the lookup
- flag Advanced Options
- insert "WILDCARD" in Match Type
- Save

Ciao.
Giuseppe
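The same thing can also be set directly in transforms.conf; a sketch, where the lookup and field names are placeholders for your own:

```
[my_lookup]
filename = my_lookup.csv
match_type = WILDCARD(customer_name)
```

match_type takes the matching mode and the field it applies to, so each wildcarded field needs its own WILDCARD(...) entry.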
Hi @tayshawn, this isn't a Splunk question: if your AWS ECS sends logs in JSON format, you should ask AWS whether it's possible to have logs in a different format, but that's probably very difficult! Anyway, if you use the Splunk Add-on for AWS, you should have the parser to read these logs and extract all the fields, so you can put them in a table as you want, but without changing the original source. Ciao. Giuseppe
Hi @Yashvik, I found an error, even though it runs in my search; please try this again and check all the rows:

index=_internal source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| bin span=1d _time
| stats values(st) AS sourcetype sum(b) AS volumeB by _time idx
| rename idx AS index
| eval volumeB=round(volumeB/1024/1024/1024,2)
| sort 20 -volumeB

Ciao.
Giuseppe
Hi @nithys, if this solution works, good for you: you solved your issue! See you next time! Let me know if I can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi Team, we have 4 search heads in a cluster, and one of them is getting a KV store port issue asking us to change the port; the remaining 3 SHs are working fine. We are unable to restart Splunk on that particular SH. If I check the SH cluster status, only 3 servers are showing now. Splunk installed version: 9.0.4.1. Please find the error screenshot attached. Regards, Siva.