All Posts


index=_internal source=*license_usage.log earliest=-1d@d latest=now
    [search index=_internal source=*metrics.log fwdType=uf earliest=-1d@d latest=now | rename hostname as h | fields h]
| stats sum(b) as total_usage_bytes by h
| eval total_usage_gb = round(total_usage_bytes/1024/1024/1024, 2)
| fields - total_usage_bytes
| addcoltotals label="Total" labelfield="h" total_usage_gb

I think this is what I wanted, unless someone thinks it's inaccurate? Please advise. TY
That's exactly what I was looking for, thanks for that.
Hello Everyone, I have a query where a user selects a time range in the time picker, say 10 November 08:30am to 10 November 11:30am. The user wants to see only the events for the last 5 minutes of that range, i.e. from 10 November 11:25am to 10 November 11:30am, to look for errors in those 5 minutes. He has two panels: total errors in the selected timeframe, and total errors in the last 5 minutes of the selected timeframe. I'm able to create panel 1; how do I create panel 2? Below is my search for panel 2:

earliest=-5m latest=$info_max_time$ index=newdata sourcetype=oracle source="/u0/DATA_COUNT.txt" loglevel="ERROR" | bin span=5m _time | stats dc(loglevel) by INSTANCE_NAME
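A hedged sketch of one way to build panel 2: instead of hard-coding earliest=-5m, derive the window from the selected range with the addinfo/relative_time subsearch pattern that also appears later in this feed (the index, source, and field names are taken from the question; treat this as a starting point, not a verified answer):

index=newdata sourcetype=oracle source="/u0/DATA_COUNT.txt" loglevel="ERROR"
    [| makeresults | addinfo
     | eval earliest=relative_time(info_max_time, "-5m"), latest=info_max_time
     | table earliest latest]
| stats dc(loglevel) by INSTANCE_NAME

The subsearch emits earliest and latest fields, which Splunk applies as time bounds to the outer search, pinning it to the final five minutes of whatever range the picker supplies.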
Thank you for the reply. I also looked at this log, but it requires curating an exact list of the UFs, because I have some pollution, e.g. h = HFs, SC4S, etc. The license_usage log may be the best route if I can put together a lookup of just UFs.
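A sketch of how such a lookup could be applied, assuming a hypothetical lookup file uf_hosts.csv with a single column h listing the UF hostnames (the lookup name and column are illustrative, not an existing artifact):

index=_internal source=*license_usage.log
    [| inputlookup uf_hosts.csv | fields h]
| stats sum(b) as bytes by h

The subsearch renders as a filter of the form (h=... OR h=...), so only usage rows from the curated UF list survive.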
Hello, can someone provide feedback on how I can change the color of my panel to transparent? Below is my code snippet. I'm not great with CSS or XML. I was using Dashboard Studio, which was straightforward for changing this, but I'm back with classic for now.

<panel>
  <single>
    <title>Total First Time</title>
    <search base="base_search">
      <query>|search Cur= $t_cur$ | bin _time span=$t_bin$ | stats sum(FirstTime) as sumFirstTime by Category</query>
    </search>
    <option name="drilldown">none</option>
    <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
    <option name="refresh.display">progressbar</option>
  </single>
</panel>
The most accurate method would be to add up the size of _raw for each UF (host), but that would have terrible performance. Try using the license_usage log. The h field is the host (UF) sending the data.

index=_internal source=*license_usage.log
| stats sum(b) as bytes by h
| eval KB = bytes/1024
| rename h as UF
| table UF KB
Hello, I am looking to pass a list of devices into an enrichment playbook, but the issue I have is that the input playbook takes one device at a time and returns a JSON object of details related to that device. I then want to add each result into a JSON object. How can I achieve this in the most efficient way?
Thank you for your reply. Do you have a method of querying to get an answer for my question? I am not finding the key logs containing UF data throughput or ingest information.
If DATETIME_CONFIG is set to CURRENT then there is no need for the TIME_PREFIX or MAX_TIMESTAMP_LOOKAHEAD settings. The regexes do not match the sample data - the regex expects too many spaces. Also, there is no BREAK_ONLY_AFTER setting. Perhaps you mean MUST_BREAK_AFTER. Try these settings:

DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = TRUE
MUST_BREAK_AFTER = [\r\n]+#{5}\s+END\sSTATUS\s+\#{5}
The Metrics log is a sample of events, not an audit log.
The timewrap command requires a timechart command be used before it.  Use stats if you need to, but be sure to call timechart before calling timewrap.
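For example, a minimal sketch (the index and spans are illustrative):

index=_internal source=*metrics.log
| timechart span=1h count
| timewrap 1d

timechart produces the regular _time buckets that timewrap needs in order to fold each day into its own column.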
Hello, I would like to create a table in a dashboard that includes either the baseline metrics or an average for a different time period; i.e. if the table is showing the last 1 week, I would like to see the average of the previous week as well:

Business Transaction Name | Avg. Response Time | Avg. Response Time Baseline

Also, is there any way to set thresholds for status colors on tables? My goal is to create a weekly scheduled dashboard, and from the options I'm finding that AppD can do, it's very limited. Any ideas would be greatly appreciated. Thanks for the help, Tom
Hi, I am working on a query to determine the hourly (or daily) totals of all indexed data (in GB) coming from UFs. In our deployment, UFs send directly to the indexer cluster. The issue I am having with the following query is that the volume is not realistic, and I am probably misunderstanding the _internal metrics log. Perhaps the kb field is not the correct field to sum as data throughput?

index=_internal source=*metrics.log group=tcpin_connections fwdType=uf
| eval GB = kb/(1024*1024)
| stats sum(GB) as GB

Any advice appreciated. Thank you
Usually debugging involves just adding commands one by one and seeing if they yield the result you expect. So just remove the last spath and see if you have a separate "bundle" in each row. Then just do | spath input=logs
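Applied to the search in question, the incremental version would look something like this (the index and field names are taken from the thread):

index="factory_mtp_events"
| spath "logs{}" output=logs
| mvexpand logs
| spath input=logs

After the mvexpand, each result carries one element of logs; the final spath then extracts all of that element's fields, including test_name.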
We ended up doing a full system restore from backup to the day prior to the start of the warning messages in Splunk. So now search works without error and licensing shows normal, and, as expected, we lose data from the days after the backup to the point of restore. So, for example, if I try to search for "yesterday" I get no results, but that is the price paid for restoring from backup. I guess the question that remains is: how can we in the future "see" which syslog client (or clients) is causing a license warning to be triggered? Perhaps some security appliance sent an extended (many hours or more) burst of syslogs above the normal rate... but is there an easy way to see that in the Splunk web UI? Regards, jason
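One way to look for such a burst, reusing the license_usage source discussed earlier in this feed (a sketch, not a definitive answer - b and h are the byte count and sending host in that log):

index=_internal source=*license_usage.log type=Usage
| timechart span=1h sum(b) as bytes by h

A sudden spike in one host's column would point at the offending syslog client. This can be run from Search & Reporting over the days around the warning.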
"Do mvexpand to split it into separate results. Then do spath" Need more detail please Is there a way to see what the mvexpand returns? feels like debugging queries is next to impossible when spath... See more...
"Do mvexpand to split it into separate results. Then do spath" Need more detail please Is there a way to see what the mvexpand returns? feels like debugging queries is next to impossible when spath-ing the mv results what exactly am inputting for? index="factory_mtp_events" | spath "logs{}" output=logs | mvexpand logs | spath input=logs.test_name|  
We are running 9.1.2
I am using the below query for comparing today's, yesterday's, and 8-days-ago data. When I use the timechart command, timewrap works, but when I use it on stats I get 2 rows of data, whereas there will be multiple other URLs to compare. Is it possible to compare it with stats? Otherwise, with timechart it creates a lot of columns with URL avg and counts.

<query> URL=*
    [| makeresults | addinfo
     | eval row=mvrange(0,3)
     | mvexpand row
     | eval row=if(row=2,8,row)
     | eval earliest=relative_time(info_min_time,"-".row."d")
     | eval latest=relative_time(info_max_time,"-".row."d")
     | table earliest latest]
| eval URL=replace(URL,"/*\d+","/{id}")
| bucket _time span=15m
| stats avg(responseTime) count by URL _time
| sort -_time URL
| timewrap d
Thank you, it works now. I am going to monitor for one more day before I mark your response as the accepted solution. But in the meantime, could you kindly explain how the lines below work, please?

| eval {Status}=Status
| fields - Status
| stats values(*) as *
| eval Status=coalesce(FILE_DELIVERED, FILE_NOT_DELIVERED)
| fields Status

I started guessing/playing with it, but for certain lines I am unable to understand what they do or how they fit here to provide the desired result, TBH.
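A self-contained way to see what eval {Status}=Status does (a sketch using makeresults): the curly braces make the value of Status become the name of a new field, so a row with Status="FILE_DELIVERED" gains a field called FILE_DELIVERED. stats values(*) as * then merges all rows into one, and coalesce picks whichever of the two possible fields actually exists:

| makeresults
| eval Status="FILE_DELIVERED"
| eval {Status}=Status
| fields - Status
| stats values(*) as *
| eval Status=coalesce(FILE_DELIVERED, FILE_NOT_DELIVERED)
| fields Status

Running this returns Status="FILE_DELIVERED", demonstrating how the dynamic field name round-trips back into Status.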
Hi All, I have a scripted input which gets data from a URL and sends it to Splunk, but now I have an issue with event formatting. The actual website data I am ingesting is as shown below:

##### BEGIN STATUS #####
#LAST UPDATE  :  Tue,  28  Nov  2023  11:00:16  +0000
Abcstatus.status=ok
Abcstatus.lastupdate=17xxxxxxxx555

###  ServiceStatus  ###
xxxxx
xxxxxx
xxxx
###  SystemStatus  ###
XXXX'
XXXX

###  xyxStatus  ###
XXX
XXX
XXX
.
.
So on....

But in Splunk, the below lines are coming in as separate events instead of being part of one complete event:

##### FIRST STATUS ##### - is coming as a separate event
Abcstatus.status=ok - this is also coming as a separate event

Everything below is coming in as one event, which is correct, and the above two lines should also be part of this one event:

Abcstatus.lastupdate=17xxxxxxxx555
###  ServiceStatus  ###
xxxxx
xxxxxx
xxxx
###  SystemStatus  ###
.
.
So on....
#####   END STATUS  #####

Below is my props:

DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE=TRUE
BREAK_ONLY_AFTER = ^#{5}\s{6}END\sSTATUS\s{6}\#{5}
MUST_NOT_BREAK_AFTER=\#{5}\s{5}BEGIN\sSTATUS\s{5}\#{5}
TIME_PREFIX=^#\w+\s\w+\w+\s:\s
MAX_TIMESTAMP_LOOKAHEAD=200

Can you please help me with the issue?