All Posts
If you're confident the sampling done for metrics.log will catch all of your UFs, then the search looks good.
Did you end up finding a solution to this? I have a similar case and there is no info anywhere on the matter.
I’m not sure if there was a DMC, or was this before it? If it was already published, maybe there were Licensing views where you could try to see which source/host/sourcetype caused the bursts? Another option is to try the SoS app, which (maybe) could show this to you. And the last option is to check whether this information has been stored in the _internal index. Worst case, you must write your own report to check events’ lengths and calculate summaries based on that.
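For that worst-case report, a minimal sketch (my_index is a placeholder for wherever the data lands; adjust the time range to the period of interest):

index=my_index earliest=-1d@d latest=@d
| eval bytes = len(_raw)
| stats sum(bytes) as total_bytes by host, sourcetype
| eval total_mb = round(total_bytes/1024/1024, 2)
| sort - total_mb

This sums raw event lengths per host and sourcetype, which is slow over large ranges but independent of any sampling.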
Hi, we have multiple services that we want to filter out from journald. Is there a way to do the opposite of this stanza parameter, i.e. to exclude _SYSTEMD_UNIT=my.service?

journalctl-filter = _SYSTEMD_UNIT=my.service

If that's not possible, what's the best way to do that?
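If the journald input itself can't express a negative match, one common fallback (a sketch, not a confirmed journald-input feature: it assumes the unit name appears in the raw event, and the [journald] sourcetype stanza below is a placeholder for whatever your events actually use) is to drop unwanted events at parse time with a nullQueue transform.

props.conf:

[journald]
TRANSFORMS-drop_units = drop_my_service

transforms.conf:

[drop_my_service]
REGEX = _SYSTEMD_UNIT=my\.service
DEST_KEY = queue
FORMAT = nullQueue

Multiple services can be dropped by widening the REGEX, e.g. _SYSTEMD_UNIT=(my|other)\.service.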
Hey, I've been working on a distributed Splunk environment where one of our indexes has a very high cardinality "source" field (basically different for each event). I've noticed that when using tstats distinct_count to count the number of sources, I am getting an incorrect result (far from one per event). The query looks something like:

| tstats dc(source) where index=my_index

I've noticed that when I search on a smaller number of events (~100,000 instead of ~5,000,000), the result is correct. In addition, when using estdc I get a better result than dc (which is wildly wrong). Finally, when using stats instead of tstats, I get the correct value:

index=my_index | stats dc(source)

Any ideas? My guess is that I'm hitting some memory barrier, but there is no indication of this.
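One way to cross-check the exact count without falling back to stats over raw events (a sketch, not an explanation of the dc discrepancy) is to let tstats enumerate the distinct sources and then count the resulting rows:

| tstats count where index=my_index by source
| stats count as distinct_sources

Because the by clause forces one row per distinct source, the final count is exact rather than an aggregate computed inside tstats.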
index=_internal source=*license_usage.log earliest=-1d@d latest=now
    [search index=_internal source=*metrics.log fwdType=uf earliest=-1d@d latest=now
    | rename hostname as h
    | fields h]
| stats sum(b) as total_usage_bytes by h
| eval total_usage_gb = round(total_usage_bytes/1024/1024/1024, 2)
| fields - total_usage_bytes
| addcoltotals label="Total" labelfield="h" total_usage_gb

I think this is what I wanted, unless someone thinks it's inaccurate? Please advise. TY
That's exactly what I was looking for, thanks for that.
Hello everyone, I have a query where a user selects a time range in the time picker, let's say 10 November 08:30am to 10 November 11:30am. The user wants to see only the events for the last 5 minutes, i.e. from 10 November 11:25am to 10 November 11:30am, to look for errors in those 5 minutes. He has two panels:

1. Total errors in the selected timeframe
2. Total errors in the last 5 minutes of the selected timeframe

I'm able to create panel 1; how do I create panel 2? Below is my search for panel 2:

earliest=-5m latest=$info_max_time$ index=newdata sourcetype=oracle source="/u0/DATA_COUNT.txt" loglevel="ERROR"
| bin span=5m _time
| stats dc(loglevel) by INSTANCE_NAME
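One possible approach for panel 2 (a sketch, assuming the panel inherits the dashboard's time picker) is to search the whole selected range and then keep only the final five minutes using addinfo, which exposes the search window's upper boundary as info_max_time:

index=newdata sourcetype=oracle source="/u0/DATA_COUNT.txt" loglevel="ERROR"
| addinfo
| where _time >= info_max_time - 300
| stats dc(loglevel) by INSTANCE_NAME

This avoids mixing a relative earliest=-5m (relative to now) with the picker's latest, which is what makes the original search drift away from the selected window.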
Thank you for the reply. I also looked at this log, but it requires curating an exact list of the UFs, because I have some pollution, e.g. h values for HFs, SC4S, etc. The license_usage log may be the best route if I can put together a lookup of just UFs.
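If such a lookup existed (say, a hypothetical uf_hosts.csv with a single host column listing known UFs), the filtering might look like:

index=_internal source=*license_usage.log
| lookup uf_hosts.csv host as h OUTPUT host as matched
| where isnotnull(matched)
| stats sum(b) as bytes by h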
Hello, can someone provide feedback on how I can change the color of my panel to transparent? Below is my code snippet. I'm not great with CSS or XML. I was using Dashboard Studio, which was straightforward about how to change this, but I'm back with classic for now.

<panel>
  <single>
    <title>Total First Time</title>
    <search base="base_search">
      <query>|search Cur= $t_cur$ | bin _time span=$t_bin$ | stats sum(FirstTime) as sumFirstTime by Category</query>
    </search>
    <option name="drilldown">none</option>
    <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
    <option name="refresh.display">progressbar</option>
  </single>
</panel>
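One common classic-dashboard trick (a sketch; the hidden-CSS-panel pattern is a community convention rather than an official option, the selector may need adjusting per Splunk version, and transparent_panel is a hypothetical panel id) is to inject CSS from a panel that never renders:

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        #transparent_panel .dashboard-panel {
          background: transparent !important;
        }
      </style>
    </html>
  </panel>
</row>

The target panel then needs id="transparent_panel" on its <panel> element.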
The most accurate method would be to add up the size of _raw for each UF (host), but that would have terrible performance. Try using the license_usage log. The h field is the host (UF) sending the data.

index=_internal source=*license_usage.log
| stats sum(b) as bytes by h
| eval KB = bytes/1024
| rename h as UF
| table UF KB
Hello, I am looking to pass a list of devices into an enrichment playbook, but the issue I have is that the input playbook takes in one device at a time and returns a JSON object of details related to that device. I then want to add each result into a JSON object. How can I achieve this in the most efficient way?
Thank you for your reply. Do you have a method of querying to get an answer to my question? I am not finding the key logs containing UF data throughput or ingest information.
If DATETIME_CONFIG is set to CURRENT, then there is no need for the TIME_PREFIX or MAX_TIMESTAMP_LOOKAHEAD settings. The regexes do not match the sample data: the regex expects too many spaces. Also, there is no BREAK_ONLY_AFTER setting; perhaps you mean MUST_BREAK_AFTER. Try these settings:

DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = TRUE
MUST_BREAK_AFTER = [\r\n]+#{5}\s+END\sSTATUS\s+\#{5}
The Metrics log is a sample of events, not an audit log.
The timewrap command requires that a timechart command be used before it. Use stats if you need to, but be sure to call timechart before calling timewrap.
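For example (a sketch with placeholder index and spans):

index=my_index
| timechart span=1h count
| timewrap 1d

This charts hourly counts and then overlays each day of the search range as its own series.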
Hello, I would like to create a table in a dashboard that includes either the baseline metrics or an average for a different time period, i.e. if the table is showing the last 1 week, I would like to see the average of the previous week as well:

Business Transaction Name | Avg. Response Time | Avg. Response Time Baseline

Also, is there any way to set thresholds for status colors on tables? My goal is to create a weekly scheduled dashboard, and from the options I'm finding that AppD can do, it's very limited. Any ideas given would be greatly appreciated. Thanks for the help, Tom
Hi, I am working on a query to determine the hourly (or daily) totals of all indexed data (in GB) coming from UFs. In our deployment, UFs send directly to the indexer cluster.

The issue I am having with the following query is that the volume is not realistic, and I am probably misunderstanding the _internal metrics log. Perhaps the kb field is not the correct field to sum as data throughput?

index=_internal source=*metrics.log group=tcpin_connections fwdType=uf
| eval GB = kb/(1024*1024)
| stats sum(GB) as GB

Any advice appreciated. Thank you
Usually debugging involves just adding commands one by one and seeing if they yield the result you expect. So just remove the last spath and see if you have a separate "bundle" in each row. Then just do:

| spath input=logs
We ended up doing a full system restore from backup, to a point days prior to the start of the warning messages in Splunk. So now search works without error and licensing shows normal, and, as expected, we lose data from the days after the backup up to the point of restore. So, for example, if I try to search for "yesterday" I get no results, but that is the price paid for restoring from backup.

I guess the question that remains is: how can we in the future "see" what syslog client (or clients) is causing a license warning to be triggered? Perhaps some security appliance sent an extended (many hours or more) burst of syslogs above the normal rate, but is there an easy way to see that in the Splunk web UI?

Regards, jason
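One possible way to spot such a burst (a sketch using the license usage log; type=Usage keeps only the per-slice usage events) is to chart licensed volume per sending host over time and look for the spike:

index=_internal source=*license_usage.log type=Usage
| eval MB = b/1024/1024
| timechart span=1h sum(MB) by h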