All Posts

Any luck with this?
I am able to get the list of URLs with the top response time using the query below:

index=xyz earliest=-1hr latest=now
| rex field=_raw "^(?<sourceLBIP>\d*\.\d*\.\d*\.\d*)\s\[\w.*\]\s(?<responsetime>\d*)\s\"(?<getorpost>\w*)\s(?<uri>\S*)\sHTTP\/1.1\"\s(?<statuscode>\d*)\s(?<responsesize>\d*)\"(?<refereralURL>\S*)\"\"\w.*\"\s\S*(?<node>web*\d*)\s\S*"
| search sourceLBIP="*" responsetime="*" getorpost="*" uri="*" statuscode="*" responsesize="*" refereralURL="*" node="*"
| eval responsetime1=responsetime/1000000
| stats count by responsetime1, node, responsesize, uri, _time, statuscode
| sort -responsetime1
| head 1

I am trying to modify this query to get more detail. It already returns the single URL with the highest response time, but I also need a timechart so I can see the response-time trend for that specific URL over the last hour. I would also like the search to produce that timechart trend for whichever URL currently has the top response time, since that URL may be different each time the search runs.
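For what it's worth, a rough sketch of one common pattern for the "trend of whatever URL is currently on top" requirement: let a subsearch find the top URL for the last hour, then timechart the response time of only that URL in the outer search. The span, the aggregation function, and the repetition of the rex are assumptions to adapt:

index=xyz earliest=-1h latest=now
| rex field=_raw "<same rex as in the query above>"
| eval responsetime1=responsetime/1000000
| search
    [ search index=xyz earliest=-1h latest=now
      | rex field=_raw "<same rex as in the query above>"
      | eval responsetime1=responsetime/1000000
      | sort - responsetime1
      | head 1
      | fields uri ]
| timechart span=1m max(responsetime1) AS responsetime_seconds

The subsearch returns a single uri=value pair, so the outer search is automatically filtered to that URL even though the URL changes from run to run.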
Hi @gjhaaland, if you run a search on _internal, do you get results? Do you have any messages from Splunk? Ciao. Giuseppe
Hi, Splunk has been working for a long period without any trouble. When I changed some settings yesterday (I can't remember what I did), the search command stopped working as before (no results are returned). If I go to Settings > Indexes, the _audit, _internal, _introspection, _telemetry, _history and main indexes are all shown as disabled. I also googled this, and it may have something to do with identical IDs under the db directory: we have the same ID on some files with a sentinel extension, for example db_123_345_12 and db_123_345_12.rbsentinel. If I run netstat -an | grep 9997, we have many TCP sessions established. I have of course rebooted and restarted the Splunk server several times, but it does not help much. Thanks in advance, I hope someone can give me a hint. Rgds Geir
Hi, I'm looking to get a one-month report for all alerts generated from a Splunk app. My "FSS" app has around 60 alerts configured. I want to generate a report showing which alerts Splunk triggered in the last month, with the date and time of each. Thanks, Abhineet Kumar
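A rough sketch of the kind of search that can drive such a report, assuming the alerts fire from an app named FSS and that your role can read the _audit index. The field names are the ones Splunk writes for alert_fired audit events (ss_name is the saved search name); whether the app name is exposed as ss_app can vary by version, so verify the field names against your own _audit data:

index=_audit action=alert_fired ss_app="FSS" earliest=-30d@d latest=now
| eval fired_at=strftime(_time, "%Y-%m-%d %H:%M:%S")
| stats count AS times_triggered latest(fired_at) AS last_triggered by ss_name
| sort - times_triggered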
Hi @Marta88, as you can read at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf?utm_source=answers&utm_medium=in-comment&utm_term=limits.conf&utm_campaign=refdoc, the differences are:

use_stats_v2 = [fixed-width | <boolean>]
* Specifies whether to use the v2 stats processor.
* When set to 'fixed-width', the Splunk software uses the v2 stats processor for operations that do not require the allocation of extra memory for new events that match certain combinations of group-by keys in memory. Operations that cause the Splunk software to use v1 stats processing include the 'eventstats' and 'streamstats' commands, usage of wildcards, and stats functions such as list(), values(), and dc().
* NOTE: Do not change this setting unless instructed to do so by Splunk Support.
* Default: true

and

stats = <boolean>
* This setting determines whether the stats processor uses the required field optimization methods of Stats V2, or if it falls back to the older, less optimized version of required field optimization that was used prior to Stats v2.
* This setting only applies when 'use_stats_v2' is set to 'true' or 'fixed-width' in 'limits.conf'.
* When Stats v2 is enabled and this setting is set to 'true', the stats processor uses the Stats v2 version of required field optimization.
* When Stats v2 is enabled and this setting is set to 'false', the stats processor falls back to the older version of required field optimization.
* Do not change this setting unless instructed to do so by Splunk support.
* Default: false

In a few words: the v2 stats processor avoids allocating extra memory for new group-by combinations, while v1 processing is still used for eventstats, streamstats, wildcards, and functions such as list(), values(), and dc().

Ciao. Giuseppe
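As a concrete illustration of the distinction the excerpt draws, here is a rough, run-anywhere sketch against the _internal index (which every Splunk instance has). The first search uses only fixed-width aggregations and can stay on the v2 stats processor; the second, by using dc() and values(), falls back to v1 processing as described above:

index=_internal earliest=-15m
| stats count avg(linecount) AS avg_lines by sourcetype

index=_internal earliest=-15m
| stats dc(host) AS hosts values(source) AS sources by sourcetype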
Hi, I'm trying to figure out the most recommended way to set up an index that stores data ingested in the following manner:
1) Every ~30 days a baseline of events is sent, specifying the current "truth".
2) Between baselines, small updates are ingested, specifying diffs from the previous baseline.
A baseline would be around ~1 GB, and the small updates would be ~1 MB every few days. Queries on this index will build a "current state" by querying the baseline + the updates since. This would require a baseline + updates to be kept in warm buckets. I was wondering what would be the best indexes.conf configuration for this case? My initial thought was:
frozenTimePeriodInSecs=7776000 # 90 days to keep ~3 baselines
maxDataSize=2000 # max size of a baseline
maxWarmDBCount=30
The reason I set maxWarmDBCount to 30 was in case of an update every day, with automatic rolling from hot to warm buckets. If hot buckets can stay hot for multiple days, I could reduce this number. Any inputs? Thanks!
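For reference, a minimal indexes.conf sketch along those lines, assuming a hypothetical index name baseline_idx and the default $SPLUNK_DB paths (the name and paths are placeholders; maxDataSize is in MB):

[baseline_idx]
homePath = $SPLUNK_DB/baseline_idx/db
coldPath = $SPLUNK_DB/baseline_idx/colddb
thawedPath = $SPLUNK_DB/baseline_idx/thaweddb
# keep roughly 3 baselines (90 days) before data is frozen
frozenTimePeriodInSecs = 7776000
# allow a hot bucket to grow to ~2 GB so a full baseline fits in one bucket
maxDataSize = 2000
# enough warm buckets for daily hot-to-warm rolls between baselines
maxWarmDBCount = 30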
With the latest version of Splunk, I heard that there will be some changes, as reported at this link, but it is not specified what the changes are. https://docs.splunk.com/Documentation/Splunk/9.1.1/ReleaseNotes/Deprecatedfeatures   Thanks Marta
Hi @Marta88, what do you mean by "stats command version 1 and version 2"? The stats command is more or less always the same. Ciao. Giuseppe
Help me understand what these 2 lines are for? I have other fields besides Values and sourcetype, and I need to apply this expansion to the 2nd column (column name = Values).
| eval C1=mvmap(C1, C1."_R".row)
| foreach 2 3 4 [ eval C<<FIELD>>=random() % 10000 ]
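For context, a minimal run-anywhere sketch of what those two constructs do (the field names src, row, colA, and colB are made up for illustration): mvmap applies an expression to every value of a multivalue field, and foreach repeats an eval template once for each listed field, substituting the field name for <<FIELD>>.

| makeresults
| eval src=split("a,b,c", ","), row=7, colA=1, colB=2
| eval src=mvmap(src, src."_R".row)
| foreach colA colB [ eval double_<<FIELD>>=<<FIELD>> * 2 ]

After this runs, src is the multivalue a_R7 b_R7 c_R7 (mvmap applied the string concatenation to each element), and foreach created double_colA and double_colB from its template. In the snippet being asked about, the same idea tags each value of C1 with its row number and fills columns C2, C3, and C4 with random test data.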
Here is something similar to what I have tried. Please let me know where I might be making a mistake.

<form version="1.1" theme="dark">
  <label>test</label>
  <init>
    <set token="tok_row">0</set>
  </init>
  <search id="base_data">
    <query>index="_internal" earliest=-15m@m
| stats values(source) as Values by sourcetype
| eval column_expansion=Values
    </query>
  </search>
  <row>
    <panel>
      <table>
        <search base="base_data">
          <query>| eval Values=if(row=$tok_row$, column_expansion, mvindex(column_expansion, 0, 0))</query>
        </search>
        <fields>"Values","sourcetype"</fields>
        <drilldown>
          <eval token="tok_row">if($row.row$=$tok_row$, 0, $row.row$)</eval>
        </drilldown>
      </table>
    </panel>
  </row>
</form>
You are writing the lookup output directly to Customer, so when the lookup finds no match it overwrites Customer with a null value. Do it like this:
| lookup customer_lookup customer_name as Customer output standard_customer_name
| eval Customer=coalesce(standard_customer_name, Customer)
So, if your Customer does not exist in the lookup, the lookup returns a null standard_customer_name and the coalesce just keeps the original Customer.
"Search is waiting for input" is a token problem; please post your XML search and drilldown segment.
Hi, I would like to know the difference between version 1 and version 2 of the stats command. Thank you Kind regards Marta
Hi, I've tried this and it does not work. I need to block all data being written to our indexers from a set of IPs (network security devices that try to find compromises on our servers, including the Splunk HFs, UFs, etc. - so I do want to drop this at the index level). I've placed this code in the etc/system/local props.conf and transforms.conf files - is that correct? It doesn't seem to drop the data either by IP address or by hostname. Thanks.
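For reference, the usual shape of that configuration on the indexers is roughly the sketch below, placed in $SPLUNK_HOME/etc/system/local (the host value 10.1.2.3 and the stanza/transform names are placeholders). Note that [host::...] stanzas match the host field Splunk assigned to the event, not necessarily the sending IP, which is one common reason this approach appears not to work:

props.conf:
[host::10.1.2.3]
TRANSFORMS-drop_scanner = drop_scanner_events

transforms.conf:
[drop_scanner_events]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue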
I have a Splunk Enterprise installation and a Splunk Cloud stack. I want to migrate logging from Enterprise to Splunk Cloud. My EC2 machines have an old Splunk Forwarder installed and are forwarding to the Splunk Enterprise instance. The log file I'm ingesting is JSON format, but each line contains a SYSLOG prefix. This prefix seems to be stripped out by Splunk Enterprise, from what I can tell. The sourcetype of the log is a custom type which is NOT explicitly defined on the Splunk Enterprise server. Since the log is JSON, no explicit field extraction is needed: the log events are just JSON messages and are properly extracted. Now I've changed the outputs.conf on the EC2 machine to send the logs to Splunk Cloud. Nothing else changed. Splunk Cloud indexes the events, but the SYSLOG header shows up in Splunk Cloud. That's why the events don't seem to be recognized as JSON and field extraction is not working. Any idea how to tell Splunk Cloud to strip the SYSLOG header from these events? And especially... why was this apparently working automatically on the Splunk Enterprise side? Both Splunk installations have the Splunk Add-on for Unix installed, which seems to contain configuration for stripping SYSLOG headers from events, but I don't understand yet how that comes into action. My inputs.conf:

[monitor:///var/log/slbs/tors_access.log]
disabled = false
blacklist = \.(gz|bz2|z|zip)$
sourcetype = tors_access
index = torsindex

There is no props.conf or transforms.conf on the EC2 machine with the Splunk forwarder for this app (and if there were, it should have kicked in when I changed the output to Splunk Cloud).
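One common way to handle this is a props.conf stanza for the custom sourcetype, deployed to Splunk Cloud in a small app. A sketch, assuming the SYSLOG prefix ends right before the first { of the JSON payload - the SEDCMD class name and the exact regex are assumptions to adapt to the actual header format:

[tors_access]
# strip everything before the first "{" at index time (the SYSLOG header)
SEDCMD-strip_syslog_prefix = s/^[^{]+//
# parse the remaining payload as JSON at search time
KV_MODE = json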
Can anyone help please?
@bowesmana This is exactly what I was looking for, and it's excellent. Thank you for your response! I tried your query and it works great, but when I apply the same approach to my query it does not work. It still shows the multiple values, and when I click on the row it displays a "search is waiting for input.." message. The results I am displaying are values() through stats; please let me know if that could be the reason it is not working, or if it is something else.
I am also facing a similar problem while submitting my Splunk add-on app to Splunk. In my case, I am making a POST request to my software using the code below:

response = requests.post(url=url, headers=headers, json=temp_container, verify=False, timeout=60)

After the review and feedback from the Splunk team, I added a field to my add-on's HTML that lets users enter the path to their SSL certificate (an optional field). I then changed my Python script so that if the user has entered a path, the code below is executed, and otherwise the one above:

response = requests.post(url=url, headers=headers, json=temp_container, timeout=60, verify=certloc)

certloc is the path to the certificate. However, I am getting the same response from the review team about the code where I have kept verify=False. If I remove that code from the Python script, will it make it mandatory for users to enter the path to the SSL certificate? In that case, do users have to use their own certificate and place it inside the default folder of the package, or do we generate the certificate, place it inside the default folder, and then package it before distributing it? Can the same certificate be used by all app users when we distribute the package? In our case, every customer has their own instance of our product, just like every user has their own Splunk instance.
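A minimal sketch of one way this is often handled so that verify=False never appears in the code: when no certificate path is configured, fall back to verify=True (the default system CA bundle) instead of disabling verification. The names url, headers, temp_container, and certloc follow the snippet above; the fallback behaviour itself is an assumption about what the reviewers expect, not their stated requirement:

import requests

def post_container(url, headers, temp_container, certloc=None):
    # Use the user-supplied CA bundle / certificate path when given,
    # otherwise fall back to the default CA bundle (verify=True).
    verify = certloc if certloc else True
    return requests.post(
        url=url,
        headers=headers,
        json=temp_container,
        timeout=60,
        verify=verify,
    )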
Hi @MScottFoley, just to complete the solution from @PickleRick, which is perfect, you have to:
1) go to [Settings > Lookups > Lookup definitions]
2) choose the lookup
3) flag Advanced Options
4) insert "WILDCARD" in Match Type
5) Save
Ciao. Giuseppe
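For reference, the same setting expressed directly in transforms.conf looks roughly like this (my_lookup, my_lookup.csv, and field1 are placeholder names; note that match_type takes the wildcarded field name in parentheses):

[my_lookup]
filename = my_lookup.csv
match_type = WILDCARD(field1)
max_matches = 1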