All Posts


Hi, it's hard to help you without more information about your environment and queries. You could check whether this helps: https://conf.splunk.com/files/2020/slides/TRU1761C.pdf There are many more presentations that could help, too. r. Ismo
Hi SanDeep, Thanks for watching the episode. The product you are using to monitor your K8s cluster determines which agent you need. If you had AppD CSaaS, for example, then you would be correct: we would need to install a cluster agent. However, this video is on CCO (Cisco Cloud Observability) and COP (Cisco Observability Platform). The agents in CCO are called collectors because they make use of OpenTelemetry. I have written a guide on Cisco U that might help you: https://ondemandelearning.cisco.com/apollo-alpha/tc-cnao-app-auto-instrumentation/pages/1 There is also a guide for CSaaS: https://ondemandelearning.cisco.com/apollo-alpha/tc-appd-auto-apm-instrumentation/pages/1
Hi, you should look at fieldformat with tostring(var, "commas"), which fulfills your requirement. r. Ismo
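A minimal sketch of that approach, assuming a numeric field named var (fieldformat changes only how the value is rendered, so the field stays numeric for sorting):

... | fieldformat var = tostring(var, "commas")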
Hi, can you explain a bit more what you mean by this? Usually those are found in Splunk's internal logs. r. Ismo
Hi, maybe you could use several rows and panels with some reports and/or base and post-process searches? See more: https://docs.splunk.com/Documentation/Splunk/9.2.0/Viz/Savedsearches r. Ismo
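As a sketch, one base search feeding two post-process panels in Simple XML might look like this (the index and field names are placeholders, not from the original question):

<dashboard>
  <label>Base search example</label>
  <search id="base">
    <query>index=_internal | stats count by sourcetype, host</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <search base="base">
          <query>| stats sum(count) as total by sourcetype</query>
        </search>
      </table>
    </panel>
    <panel>
      <table>
        <search base="base">
          <query>| stats sum(count) as total by host</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>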
Hi @Fadil.CK, After looking into your question, someone from our Ops team ran an SSL scan of Agent traffic on both of your instances, and they didn't see anything reporting TLS 1.0 or 1.1. Was this just a hypothetical question?
Hi guys, can you please help me? I'm trying to use a space as the thousands separator and I can't; the best I could get is a comma, with this: eval value = if(value!="N/A", printf("%'d", value), value) Result = 123,456 So I guess I can change it with a replace, maybe. But then we have problem number 2: when I try to sort by value with the arrow on the column, the sort isn't correct, and the bigger numbers are treated as strings. Can you guys help me solve this, please? Tell me if you need more details.
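One sketch of the replace idea (the field name value comes from the post above; using fieldformat instead of eval keeps the stored value numeric, so the column sort should stay correct):

... | fieldformat value = if(value != "N/A", replace(printf("%'d", value), ",", " "), value)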
Have you tried escaping the # characters, like \#?
Hi @Brian.Lee, I found this on our Docs page, if it helps at all.
Hi, until you give us some sample data, it's hard to tell exactly how to do it. Here are some ideas on how to proceed with this case: ... | rex "....(?<yourXML>....until it ends)...." | fields _time yourXML | xmlkv maxinputs=99999 yourXML .... r. Ismo
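A more concrete sketch of the same idea, assuming each XML fragment is wrapped in a <payload> root element (the tag name is an assumption, and spath is used here instead of xmlkv to parse a named field):

... | rex max_match=0 "(?<yourXML><payload>.*?</payload>)"
| fields _time yourXML
| mvexpand yourXML
| spath input=yourXML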
Hi, this could be the reason. I'm hunting this kind of issue myself and still working on it. You should look at limits.conf and its [kv] stanza. Also check the TRUNCATE value in your sourcetype (and/or host and source) definition. Both of those affect how many fields Splunk automatically finds. r. Ismo
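As a sketch, the settings in question might look like this (the values are illustrative, not recommendations; the [kv] stanza lives in limits.conf and TRUNCATE in props.conf):

# limits.conf
[kv]
limit = 200
maxchars = 20480

# props.conf
[your_sourcetype]
TRUNCATE = 50000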
Hello, I have some Linux systems that run this line from cron every day: /usr/bin/nmon -f -t -s 300 -c 288 -m /var/log/nmon/ As a result, I have one file per day with nmon metrics. These servers don't have communication with Splunk. My question is: is it possible to ingest these files into the NMON Splunk App to analyze them? I suppose I could manually load the files into the nmon index, but I'm not sure whether I have to do something first. Thank you in advance.
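If manual loading turns out to be supported, one way to test it might be a oneshot upload from a host that does have Splunk installed (the file name, index, and sourcetype below are assumptions; check what the NMON app actually expects before loading):

splunk add oneshot /var/log/nmon/server01_240101.nmon -index nmon -sourcetype nmon_processing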
How big are the events? Splunk defaults to 200 field extractions, IIRC. Also, what type of data is it? I've seen problems extracting JSON data, especially nested JSON.
It seems that the regex is not working, because the events are still arriving in index1. I tried different regexes, but it's the same.
Hi @gcusello, Thanks for your input on this. Yeah, we have validated that "All fields" is selected in the fields drop-down. We are running the search in verbose mode, but nothing has helped. Our event is very big, and I am wondering if there is a limitation Splunk is hitting in showing the fields.
Hi @richgalloway, Thanks for your input on this. We are running the search in verbose mode, but this did not help. The event we are dealing with is very big, and we are wondering if Splunk is hitting a limitation in showing all the fields in the left-side panel's selected and interesting fields.
Yes, I tried all the options already: with quotes, without quotes, and with double quotes. All give the same error. Error in 'mstats' command: Invalid token: sum(eval(if(calc:service.thaa_stress_requests_count_lr_tags>0
Try without quotes around the field names (perhaps there is something significant about the colon?): | mstats sum(eval(if(calc:service.thaa_stress_requests_count_lr_tags>0, calc:service.thaa_stress_requests_count_lr_tags, null()))) As "Count", avg(eval(if(calc:service.thaa_stress_requests_lr_tags>0, calc:service.thaa_stress_requests_lr_tags, null()))) As "Response" where index=itsi_im_metrics by Dimension.id
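If the eval-inside-mstats form keeps failing, one alternative sketch is to drop the eval and filter afterwards (metric and dimension names are taken from the thread; note the avg here still includes zero-valued samples, unlike the eval/null() version):

| mstats sum(calc:service.thaa_stress_requests_count_lr_tags) as Count avg(calc:service.thaa_stress_requests_lr_tags) as Response where index=itsi_im_metrics by Dimension.id
| where Count > 0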
I am trying to get values from the XML parts of an event. The event starts with a few lines, then has an XML part; after that come a few more lines and another XML part. I want to first extract only the parts of the event that are in XML format.
We need more information. Are you trying to extract at search time or index time? Are you trying to keep the XML or discard it? Please share a sanitized sample event or two.