All Posts


Hi, I'm trying to figure out the most recommended way to set up an index that stores data ingested in the following manner:
1) Every ~30 days a baseline of events is sent, specifying the current "truth".
2) Between baselines, small updates are ingested, specifying diffs from the previous baseline.
A baseline would be around ~1 GB, and the small updates would be ~1 MB every few days. Queries on this index will build a "current state" by querying the baseline plus the updates since it, so a baseline and its updates need to be kept in warm buckets.
I was wondering what the best indexes.conf configuration for this case would be. My initial thought was:
frozenTimePeriodInSecs = 7776000 # 90 days, to keep ~3 baselines
maxDataSize = 2000 # max size of a baseline
maxWarmDBCount = 30
I set maxWarmDBCount to 30 to allow for an update every day, with automatic rolling from hot to warm buckets. If hot buckets can stay hot for multiple days, I could reduce this number. Any input? Thanks!
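Putting the settings from the question into a full stanza might look like the minimal sketch below. The index name baseline_idx, the path layout, and the maxHotSpanSecs line are illustrative assumptions added here, not part of the original question.

```ini
# indexes.conf — illustrative sketch only; name, paths, and values are assumptions
[baseline_idx]
homePath   = $SPLUNK_DB/baseline_idx/db
coldPath   = $SPLUNK_DB/baseline_idx/colddb
thawedPath = $SPLUNK_DB/baseline_idx/thaweddb
# 90 days, enough to keep ~3 monthly baselines before freezing
frozenTimePeriodInSecs = 7776000
# max bucket size in MB; large enough to hold one ~1 GB baseline
maxDataSize = 2000
# room for roughly one warm bucket per daily update between baselines
maxWarmDBCount = 30
# optionally cap hot-bucket time span so buckets roll to warm daily
maxHotSpanSecs = 86400
```

maxHotSpanSecs is one way to make the hot-to-warm rolling cadence predictable, which is the uncertainty the question raises about sizing maxWarmDBCount.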
With the latest version of Splunk, I heard that there will be some changes, as reported at this link, but it is not specified what the changes are. https://docs.splunk.com/Documentation/Splunk/9.1.1/ReleaseNotes/Deprecatedfeatures
Thanks
Marta
Hi @Marta88, what do you mean by "stats command version 1 and version 2"? The stats command is more or less always the same.
Ciao.
Giuseppe
Can you help me understand what these 2 lines are for? I have other fields besides Values and sourcetype, and I need to apply this expansion to the 2nd column (column name = Values).
| eval C1=mvmap(C1, C1."_R".row)
| foreach 2 3 4 [ eval C<<FIELD>>=random() % 10000 ]
Here is something similar to what I have tried. Please let me know where I might be making a mistake.
<form version="1.1" theme="dark">
  <label>test</label>
  <init>
    <set token="tok_row">0</set>
  </init>
  <search id="base_data">
    <query>index="_internal" earliest=-15m@m | stats values(source) as Values by sourcetype | eval column_expansion=Values</query>
  </search>
  <row>
    <panel>
      <table>
        <search base="base_data">
          <query>| eval Values=if(row=$tok_row$, column_expansion, mvindex(column_expansion, 0, 0))</query>
        </search>
        <fields>"Values","sourcetype"</fields>
        <drilldown>
          <eval token="tok_row">if($row.row$=$tok_row$, 0, $row.row$)</eval>
        </drilldown>
      </table>
    </panel>
  </row>
</form>
You are overwriting Customer, so if your lookup match is not found, it will overwrite Customer with null. Do it like this:
| lookup customer_lookup customer_name as Customer output standard_customer_name
| eval Customer=coalesce(standard_customer_name, Customer)
So, if your Customer does not exist in the lookup, the lookup will return a null standard_customer_name and the coalesce will just use the original Customer.
"Search is waiting for input" is a token problem; please post your XML search and drilldown segment.
Hi, I would like to know the difference between version 1 and version 2 of the stats command.
Thank you
Kind regards
Marta
Hi, I've tried this and it does not work. I need to block all data being written to our indexers from a set of IPs (network security devices that try to find compromises on our servers, including the Splunk HFs, UFs, etc.), so I do want to drop this at the index level. I've placed this code in the etc/system/local props.conf and transforms.conf files; is that correct? It doesn't seem to drop anything, either by IP address or by hostname. Thanks.
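For reference, the usual shape of an index-time drop is a nullQueue transform keyed on the host metadata, configured on the indexers (or the first heavy forwarder the data passes through), not on the UFs, and it requires a restart to take effect. The IPs and the sourcetype name below are placeholders, not taken from the question:

```ini
# transforms.conf (on the indexers, e.g. etc/system/local)
[drop_scanner_ips]
# MetaData:Host values carry a "host::" prefix
SOURCE_KEY = MetaData:Host
REGEX = ^host::(?:192\.0\.2\.10|192\.0\.2\.11)$
DEST_KEY = queue
FORMAT = nullQueue

# props.conf — attach the transform to the sourcetype (placeholder name)
[my_sourcetype]
TRANSFORMS-dropscanners = drop_scanner_ips
```

A common reason this "does not work" is applying it on a universal forwarder: UFs do not parse events, so index-time transforms there are ignored.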
I have a Splunk Enterprise installation and a Splunk Cloud stack, and I want to migrate logging from Enterprise to Splunk Cloud. My EC2 machines have an old Splunk forwarder installed and are forwarding to the Splunk Enterprise instance. The log file I'm ingesting is JSON format, but each line contains a SYSLOG prefix. This prefix seems to be stripped out by Splunk Enterprise, from what I can tell. The sourcetype of the log is a custom type which is NOT explicitly defined on the Splunk Enterprise server. Since the log is JSON, no explicit field extraction is needed: the log events are just JSON messages and are properly extracted.
Now I've changed the outputs.conf on the EC2 machine to send the logs to Splunk Cloud. Nothing else changed. Splunk Cloud indexes the events, but the SYSLOG header shows up in Splunk Cloud. That's why the events don't seem to be recognized as JSON and field extraction is not working. Any idea how to tell Splunk Cloud to strip the SYSLOG header from these events? And especially, why was this apparently working automatically on the Splunk Enterprise side?
Both Splunk installations have the Splunk Add-on for Unix installed, which seems to contain configuration for stripping SYSLOG headers from events, but I don't yet understand how that comes into action. My inputs.conf:
[monitor:///var/log/slbs/tors_access.log]
disabled = false
blacklist = \.(gz|bz2|z|zip)$
sourcetype = tors_access
index = torsindex
There is no props.conf or transforms.conf on the EC2 machine with the Splunk forwarder for this app (and if there were, it should have kicked in when I changed the output to Splunk Cloud).
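One common way to strip a syslog prefix at index time is a SEDCMD in props.conf for the sourcetype, deployed where parsing happens (in Splunk Cloud, typically via a private app). This is a hedged sketch: the sourcetype name comes from the question's inputs.conf, but the regex assumes the JSON payload starts at the first "{" on the line:

```ini
# props.conf on the parsing tier (Splunk Cloud: via a private app)
[tors_access]
# assumption: everything before the first "{" is the syslog prefix
SEDCMD-strip_syslog = s/^[^{]*{/{/
# with the prefix gone, parse the remaining payload as JSON
KV_MODE = json
```

If the Splunk Enterprise side had a similar props.conf rule (for example from the Splunk Add-on for Unix) applied to this sourcetype, that would explain why the stripping appeared to happen automatically there.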
Can anyone help please?
@bowesmana This is exactly what I was looking for, and it's excellent. Thank you for your response! I tried your query and it works great, but when I apply the same approach to my query it does not work. It still shows the multiple values, and when I click on the row it displays a "search is waiting for input..." message. The results I am displaying are values() through stats; please let me know if that could be the reason it's not working, or if it's something else.
I am also facing a similar problem while submitting my Splunk add-on app to Splunk. In my case, I am making a POST request to my software using the code below:
response = requests.post(url=url, headers=headers, json=temp_container, verify=False, timeout=60)
After the review and feedback from the Splunk team, I added an optional field to my HTML that lets users enter the path to their SSL certificate. I then changed my Python script so that if the user has entered a path, the code below is executed; otherwise the one above:
response = requests.post(url=url, headers=headers, json=temp_container, timeout=60, verify=certloc)
certloc is the path to the certificate. However, I am getting the same response from the review team about the branch where I keep verify=False. If I remove that branch from the Python script, will it make it mandatory for users to enter the path to the SSL certificate? In that case, do users have to use their own certificate and place it inside the default folder of the package, or do we generate the certificate and place it inside the default folder before packaging and distributing it? Can the same certificate be used by all app users when we distribute the package? In our case, every customer has their own instance of our product, just like every user has their own Splunk instance.
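One common pattern that avoids verify=False entirely is to keep verification on by default and only override the CA bundle when the user supplies a path. This is a sketch under my own assumptions (the helper name and its fallback behavior are not something the review team prescribed):

```python
def resolve_verify(certloc):
    """Return the value to pass as requests' `verify` argument.

    - None or blank -> True: verify against the default CA bundle
    - otherwise     -> the user-supplied CA bundle path
    Never returns False, which is what app vetting flags.
    """
    if certloc is None or str(certloc).strip() == "":
        return True
    return str(certloc).strip()


# usage (certloc comes from the add-on's optional setup field):
# response = requests.post(url=url, headers=headers, json=temp_container,
#                          timeout=60, verify=resolve_verify(certloc))
```

With this shape the certificate path stays optional for users, but the default is still a verified TLS connection rather than a disabled one.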
Hi @MScottFoley,
just to complete the solution from @PickleRick, which is perfect, you have to:
go to [Settings > Lookups > Lookup definitions]
choose the lookup
flag Advanced Options
insert "WILDCARD" in Match Type
Save
Ciao.
Giuseppe
Hi @tayshawn,
this isn't a Splunk question: if your AWS ECS sends logs in JSON format, you should ask AWS whether it's possible to have logs in a different format, but it's probably very difficult!
Anyway, if you use the Splunk Add-on for AWS, you should have the parser to read these logs and extract all the fields, so you can put them in a table as you want, but without changing the original source.
Ciao.
Giuseppe
Hi @Yashvik,
I found an error; even though it runs in my search, please try this again and check all the rows:
index=_internal source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| bin span=1d _time
| stats values(st) AS sourcetype sum(b) AS volumeB by _time idx
| rename idx AS index
| eval volumeB=round(volumeB/1024/1024/1024,2)
| sort 20 -volumeB
Ciao.
Giuseppe
Hi @nithys,
if this solution works, good for you: you solved your issue!
Let me know if I can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated
Hi Team,
We have 4 search heads in a cluster, and one search head is getting a KV store port issue asking us to change the port; the remaining 3 SHs are working fine. We are unable to restart Splunk on that particular SH, and if I check the SH cluster status, only 3 servers are showing now.
Splunk installed version: 9.0.4.1
The error is attached for visibility.
Regards,
Siva.
Hi @gcusello
I tried the query below, which works: if I select goodsdevelopment in the 1st dropdown, I get options pertaining to airbag; if I select materialdomain in the 1st dropdown, I should get options pertaining to material, sm.
1. If I want the data entity dropdown to be multi-select, since a domain can have multiple data entities, how does the query below need to be modified? I am using 3 different input tokens for the inbound query and 3 different output tokens for the outbound query.
2. Also, how do I auto-clear the existing search result panel whenever a new domain is selected?
Query used:
<input type="dropdown" token="tokSystem" searchWhenChanged="true">
  <label>Domain Entity</label>
  <fieldForLabel>$tokEnvironment$</fieldForLabel>
  <fieldForValue>$tokEnvironment$</fieldForValue>
  <search>
    <query>| makeresults
| eval goodsdevelopment="a",materialdomain="b,c",costsummary="d"</query>
  </search>
  <change>
    <condition match="$label$==&quot;a&quot;">
      <set token="inputToken">test</set>
      <set token="outputToken">test1</set>
    </condition>
    <condition match="$label$==&quot;c&quot;">
      <set token="inputToken">dev</set>
      <set token="outputToken">dev1</set>
    </condition>
    <condition match="$label$==&quot;m&quot;">
      <set token="inputToken">qa</set>
      <set token="outputToken">qa1</set>
    </condition>
  </change>
</input>
------
<row>
  <panel>
    <html id="messagecount">
      <style>
        #user{
          text-align:center;
          color:#BFFF00;
        }
      </style>
      <h2 id="user">INBOUND</h2>
    </html>
  </panel>
</row>
<row>
  <panel>
    <table>
      <search>
        <query>index=$indexToken1$ source IN ("/*-*-*-$inputToken$") | timechart count by ObjectType ```| stats count by ObjectType```</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
------
<row>
  <panel>
      <h2 id="user">OUTBOUND</h2>
    </html>
    <chart>
      <search>
        <query>index=$indexToken$ source IN ("/*e-f-$outputToken$-*-","*g-$outputToken$-h","i-$outputToken$-j")</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
Hello everyone! We have a container service running on AWS ECS with the Splunk log driver enabled (via an HEC token). At the moment the log lines look awful (see the example below), and no event level is extracted:
{ [-]
   line: xxxxxxxxx - - [16/Sep/2023:23:59:59 +0000] "GET /health HTTP/1.1" 200 236 "-" "ELB-HealthChecker/2.0" "-"
   source: stdout
   tag: xxxxxxxxxxx
}
host = xxx source = xxx sourcetype = xxxx
We would like to make changes in Splunk so that the events follow a better-formatted standard, like the following:
Sep 19 03:27:09 ip-xxx.xxxx xx[16151]: xxx ERROR xx - DIST:xx.xx BAS:8 NID:w-xxxxxx RID:b FID:bxxxx WSID:xxxx
host = xxx level = ERROR source = xxx sourcetype = xxx
We do have a log forwarder rule configured (logs for other services are all formatted as above). May I get some help reformatting these logs? Much appreciated!
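If the line/source/tag wrapper object is the main problem, the Docker Splunk logging driver's splunk-format option can send raw lines to HEC instead of the JSON envelope. A hedged ECS task-definition sketch; the URL, token placeholder, and sourcetype are illustrative, not taken from the question:

```json
"logConfiguration": {
  "logDriver": "splunk",
  "options": {
    "splunk-url": "https://your-stack.example.com:8088",
    "splunk-token": "<HEC token>",
    "splunk-format": "raw",
    "splunk-sourcetype": "my_service"
  }
}
```

With splunk-format set to raw, each event is just the container's log line, so sourcetype-level extractions (for example for the level field) can then apply as they do for the other services.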