All Topics

Hello - I have a table with several columns: Host, Src IP, Dest IP, Src Port, Dest Port (e.g. myHost, 10.0.0.1, 10.0.0.2, 50000, 80). I would like to have cell-based drilldowns. For example, Host would drill down into a dashboard called host_detail.xml, and the rest of the columns would fill the value of the clicked cell into the appropriate filter token. The tokens are called src_ip_tok, dest_ip_tok, src_port_tok, and dest_port_tok. How would I accomplish this? Thank you.
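A minimal Simple XML sketch of what the asker describes (the token names and host_detail dashboard come from the post; the app path, base search, and host_tok form token are placeholders): per-cell behavior is handled with <drilldown> plus one <condition field="..."> per column, using $click.value2$ for the clicked cell's value.

```xml
<table>
  <search><query>index=main | table Host "Src IP" "Dest IP" "Src Port" "Dest Port"</query></search>
  <drilldown>
    <!-- clicking the Host cell opens the detail dashboard -->
    <condition field="Host">
      <link target="_blank">/app/search/host_detail?form.host_tok=$click.value2$</link>
    </condition>
    <!-- clicking any other cell sets the matching filter token -->
    <condition field="Src IP"><set token="src_ip_tok">$click.value2$</set></condition>
    <condition field="Dest IP"><set token="dest_ip_tok">$click.value2$</set></condition>
    <condition field="Src Port"><set token="src_port_tok">$click.value2$</set></condition>
    <condition field="Dest Port"><set token="dest_port_tok">$click.value2$</set></condition>
  </drilldown>
</table>
```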
Splunk documentation says: "fillnull command is a distributable streaming command when a field-list is specified. When no field-list is specified, the fillnull command fits into the dataset processing type." I wonder why it runs as a dataset processing command when no fields are specified. The results are the same either way, but there must be a reason. Thanks for letting us know.
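One plausible explanation (my reading of the docs, not stated in the post): with an explicit field-list, each indexer can fill the named fields in each event independently; with no field-list, fillnull must first discover every field name that occurs anywhere in the result set before it knows what to fill, so it has to see the whole dataset. The two forms side by side:

```
... | fillnull value=0 bytes packets    <- field-list given: distributable streaming
... | fillnull value=0                  <- no field-list: dataset processing
```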
Hi community, a few months ago I took over our Splunk cluster from a colleague who quit his job. Now we are decommissioning some application servers that have a universal forwarder installed. What are the recommended steps to deregister the forwarder from the management and deployment server? The apps and serverclasses are not affected; these are still needed. Thanks in advance for your support. Armin
Can anyone explain the steps to follow to convert advanced XML dashboards to simple XML dashboards?
I want to display the amount of data sent in a certain time period on the dashboard. I think the best way is with a "Single Value" visualization. How can I display the number of results of a search on the dashboard? For example, my search ("message.additionalInfo.attributes.properties.receiver-market-partner-id"=12345678) finds 1500 events. How can I display the 1500 on the dashboard as a single value? Thanks a lot!
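A minimal sketch of the usual approach (the search string is from the post; the rest is a generic pattern): append | stats count so the search returns exactly one row with one field, which a Single Value panel then displays directly.

```
"message.additionalInfo.attributes.properties.receiver-market-partner-id"=12345678
| stats count
```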
I saw a question on the internet while searching for answers to a separate question, and a few comments below it disagreed about the correct answer. Now I am confused as to what the correct answer should have been. This was the question:

This file has been manually created on a universal forwarder:

/opt/splunkforwarder/etc/apps/my_TA/local/inputs.conf
[monitor:///var/log/messages]
sourcetype=syslog
index=syslog

A new Splunk admin comes in, connects the universal forwarders to a deployment server, and deploys the same app with a new inputs.conf file:

/opt/splunk/etc/deployment-apps/my_TA/local/inputs.conf
[monitor:///var/log/maillog]
sourcetype=maillog
index=syslog

Which file is now monitored: /var/log/maillog, or both /var/log/maillog and /var/log/messages?
I'm using a lookup but don't know how to do a partial match instead of an exact match. Example: 10.20.30.40 is in the list, and I want it to match the result URL=https://10.20.30.40~. Is that possible?
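One common approach (the lookup name, CSV filename, and field name here are placeholders, not from the post): declare the lookup with match_type = WILDCARD in transforms.conf and put wildcards around the values in the CSV, so a row like *10.20.30.40* matches URL=https://10.20.30.40~.

```ini
# transforms.conf (names are illustrative)
[url_ip_lookup]
filename = url_ip_lookup.csv
match_type = WILDCARD(url)
```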
Hello, the documentation shows the JSON format as metadata fields plus an "event" field with additional data in it (Format events for HTTP Event Collector - Splunk Documentation). My question is: how important is it to preserve this structure? Can you remove the "event" nesting? That's how events look in Splunk right now; I have to press the "+" sign to see the actual message. If I remove the "event" nesting, I can see the main message without extra actions. P.S. If this is of any importance, the data is being transferred to Splunk via TCP, not HTTP.
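For reference, this is the shape the HEC /services/collector endpoint expects (the field values below are invented for illustration): metadata at the top level, payload under "event". That wrapper only matters for that HEC endpoint; a raw TCP input, as described in the P.S., simply indexes whatever text arrives, so there the nesting is purely a formatting choice on the sender's side.

```json
{
  "time": 1662649907,
  "host": "app01",
  "sourcetype": "my_app",
  "event": {"message": "user logged in", "user": "alice"}
}
```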
As we can see below, the two events contain multiple results. But when I try to export them as CSV, all these results get merged into a single row, one after the other. Currently the merged output for one event is: result1 result2 result3 result4. But I want the data to be exported to CSV as it appears (i.e. all the results in different rows).
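A common pattern for this (assuming the multivalue field is called result, which is a guess on my part): expand each value into its own row before exporting.

```
... | mvexpand result
    | table _time result
```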
Hi, I want to disable a few logs at the source. How can I do that? We have a server which forwards OS logs along with application logs; both are being forwarded to different indexes. Now we want to disable the application log index, so we want to stop log forwarding from the source server itself.
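A sketch of the usual mechanism (the monitor path below is a placeholder): disable the specific monitor stanza in inputs.conf on the forwarder on that server, then restart or reload the forwarder.

```ini
# inputs.conf on the source server (stanza path is illustrative)
[monitor:///var/log/myapp]
disabled = 1
```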
Every time we reboot the server, the splunkd service does not start automatically, even though the startup type is Automatic. The OS version is Windows Server 2019 and the UF agent version is 8.2.5.
All, I have a simple bash script which pulls down some data from git and restarts a service. I'd like to give my SOC a button on a Splunk dashboard that they can just click. I can put the script anywhere, really. I remember seeing that I should be able to execute a script from splunk/bin from dashboards, but I am having a hard time finding an example. Does someone have an example/recipe they can point me to?
The ability to have a *nix UF run under a non-root user but still be able to have it read files was introduced with v9.0.0 of the UF (https://docs.splunk.com/Documentation/Forwarder/9.0.0/Forwarder/Installleastprivileged). Is there a way that I, as a Splunk admin, could see which (if any) POSIX capabilities (CAP_DAC_READ_SEARCH, and potentially also CAP_NET_ADMIN and CAP_NET_RAW) the various forwarders are running with? I've had a look at index=_internal to see if the UF logs anything during start-up, but I haven't found anything.
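Nothing Splunk-generated that I know of, but on each forwarder host the file capabilities on the binary can be read with getcap from libcap (path assumes a default install):

```
getcap /opt/splunkforwarder/bin/splunkd
```

Note that depending on how the least-privileged install was done, the capabilities may instead be granted through the systemd unit rather than set on the binary, in which case inspecting the SplunkForwarder service unit for AmbientCapabilities would be the place to look.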
The indexer rebooted non-gracefully. After the reboot, Splunk starts generating crash files shortly after every restart. I spent the last two days running fsck repair on all buckets; it doesn't seem to have helped. There are no relevant errors in splunkd.log.

Crash log files:
crash-2022-09-07-14:45:07.log
crash-2022-09-07-14:45:15.log
crash-2022-09-07-14:45:24.log
crash-2022-09-07-14:45:32.log
crash-2022-09-07-14:45:40.log

Every crash log has the same pattern as below, with only the crashing thread changing (IndexerTPoolWorker-2, IndexerTPoolWorker-4, IndexerTPoolWorker-7 and the like):

[build 87344edfcdb4] 2022-09-07 13:31:01 Received fatal signal 6 (Aborted) on PID 193171.
Cause: Signal sent by PID 193171 running under UID 53292.
Crashing thread: IndexerTPoolWorker-2
Backtrace (PIC build):
[0x00007EFDA0A27387] gsignal + 55 (libc.so.6 + 0x36387)
[0x00007EFDA0A28A78] abort + 328 (libc.so.6 + 0x37A78)
[0x00007EFDA0A201A6] ? (libc.so.6 + 0x2F1A6)
[0x00007EFDA0A20252] ? (libc.so.6 + 0x2F252)
[0x000056097778BA2C] ReadableJournalSliceDirectory::findEventTimeRange(int*, int*, bool) ...
Libc abort message: splunkd: /opt/splunk/src/pipeline/indexer/JournalSlice.cpp:1780: bool ReadableJournalSliceDirectory::findEventTimeRange(st_time_t*, st_time_t*, bool): Assertion `tell() == pos' failed.
Hi All, I have a lookup table table1.csv with the following fields: index, sourcetype, host, last_seen. I have a custom index, idx1, which has the following fields: orig_index, orig_sourcetype, orig_host. I need to search for each host value from the lookup table in the custom index, fetch the max(_time), and then store that value against the same host in last_seen. I tried the SPL below to build the search, but it is not fetching any results:

|inputlookup table1.csv
|eval index=lower(index)
|eval host=lower(host)
|eval sourcetype=lower(sourcetype)
|table index, host, sourcetype
|rename index AS orig_index, host AS orig_host, sourcetype AS orig_sourcetype
|format
|eval searchq=search
|eval searchq="index=\"idx1\"".searchq."|stats max(_time) AS last_seen BY orig_index, orig_sourcetype, orig_host"
|search searchq

However, when I used |fields searchq it gave a proper SPL as the result:

index="idx1" ((orig_host="1.1.1.1" AND orig_index="xxx" AND orig_sourcetype="sourcetype1") OR (orig_host="1.1.1.2" AND orig_index="xxx" AND orig_sourcetype="sourcetype2"))|stats max(_time) AS last_time BY orig_index, orig_sourcetype, orig_host

And when I run the above resulting SPL as a separate search, I get the proper results. Please share if there is a way to correct the above approach, or if some different approach can help to build the solution. Thank you
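An alternative sketch that avoids hand-building the query string (field names are from the post; this is a generic pattern, not a confirmed fix): use the lookup as a subsearch so its rows become the search filter, then compute last_seen with stats.

```
index=idx1
    [| inputlookup table1.csv
     | eval orig_index=lower(index), orig_host=lower(host), orig_sourcetype=lower(sourcetype)
     | fields orig_index orig_sourcetype orig_host]
| stats max(_time) AS last_seen BY orig_index orig_sourcetype orig_host
```

The results could then be written back to the lookup with outputlookup if the goal is to persist last_seen.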
I have a number of hosts sending logs "in the future". I've configured my indexer's props.conf to adjust the TZ for the select few problem children and restarted the indexer. How can I immediately verify that my changes have put these hosts' new events in the correct TZ (meaning, no longer in the future)? Basically, the existing "future events" are making the timeline noisy and I can't see where (or perhaps *when*) new events are coming in. I could wait several hours for them to clear out, but that's not ideal.
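One way to check right away (generic SPL, not from the post; the 60-second threshold is arbitrary): look only at events indexed in the last few minutes and compare each event's timestamp to its index time. Hosts whose new events are still timestamped ahead of when they were indexed show a large positive skew.

```
index=* _index_earliest=-15m
| eval skew_sec = _time - _indextime
| stats max(skew_sec) AS max_skew BY host
| where max_skew > 60
```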
We are monitoring log files that rotate multiple times daily. We have wildcards specified in the monitor stanza, but not in the source setting. Our issue arises when the software that generates the logs is updated and we have to update the paths in our inputs.conf. Here is a sample stanza:

[monitor:///apps/psprdcs/pt8.58/config/sprdcs/appserv/SPRDCS/LOGS/APPSRV_*]
index=peoplesoft
sourcetype=peoplesoft_appsrv
source=/apps/psprdcs/pt8.58/config/sprdcs/appserv/SPRDCS/LOGS/APPSRV.LOG

What we would like to do is something along these lines, so version changes don't require Splunk changes:

[monitor:///apps/psprd*/.../APPSRV_*]
index=peoplesoft
sourcetype=peoplesoft_appsrv
source=/apps/psprd*/.../APPSRV.LOG

I have notes from my predecessor saying we can't do that because you cannot specify a wildcard on the source line. We want the full pathname for the source. Is this limitation still in effect, or can it be done? Thanks in advance! Jeff
I need to extract P302, P1, P2, etc. with a single regular expression. I built (?<Par>P[1-9][0-9]*), but when I run this in Splunk it only captures the first match (P302). Sample event:

[SearchBroker - XXX] - [submitSearch] INFO: XXX [] - submitSearch time=36 pTime={P302=11,P1=7,P301=13,P2=24,P3=23,P4=31,P5=25,P6=23,P300=13,P7=23,P8=24,P9=24,P10=21,P12=24,P11=23,P1000=1,P14=26,P13=24,P16=21,P15=20,P18=20,P17=23} pProcessTime={P302p=10,P1p=6,P301p=12,P2p=23,P3p=22,P4p=30,P5p=24,P6p=23,P300p=13,P7p=23,P8p=24,P9p=24,P10p=21,P12p=23,P11p=22,P1000p=0,P14p=26,P13p=23,P16p=20,P15p=20,P18p=20,P17p=23} pWaitTime=
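The usual fix for "only the first match" (standard rex behavior, not something stated in the post): field extractions keep a single match by default, but the rex command with max_match=0 returns all matches as a multivalue field. Using the regex from the post:

```
... | rex max_match=0 field=_raw "(?<Par>P[1-9][0-9]*)"
```

Note this will also capture P301, P300, etc., since the pattern matches any P-number; a tighter pattern would be needed if only specific identifiers are wanted.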
I've installed the forwarder on several other domain controllers in our environment, but these last 2 keep failing, throwing the all-too-enigmatic "setup ended prematurely" error. "Like a F-18 bro!" They are Windows Server 2019, using the 9.0 forwarder 64-bit installer. Even though the log states "SplunkForwarder already exists", there is no current installation of the forwarder (though I have attempted the install several times). The logs don't seem to have any intel I find useful, but maybe you all have a better secret decoder ring? msiexec.log: splunk.log: Other than a few details like "input type=perfmon because it already exists", I am still unsure of the problem:

12:51:47 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/outputs/tcp/server "name=REDACTED:9997" >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1"
HTTP/1.1 400 Bad Request Date: Thu, 08 Sep 2022 16:51:47 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 X-Content-Type-Options: nosniff Content-Length: 170 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd
<?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">REDACTED:9997 forwarded-server already present</msg> </messages> </response>
12:51:47 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost lookup_host=localhost^&logs=Application^&logs=Security^&logs=System^&logs=ForwardedEvents^&logs=Setup >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1"
HTTP/1.1 200 OK Date: Thu, 08 Sep 2022 16:51:49 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 
X-Content-Type-Options: nosniff Content-Length: 4477 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <!--This is to override browser formatting; see server.conf[httpServer] to disable
.--> <?xml-stylesheet type="text/xml" href="/static/atom.xsl"?> <feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>win-event-log-collections</title> <id>/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections</id> <updated>2022-09-08T12:51:49-04:00</updated> <generator build="6818ac46f2ec" version="9.0.0"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/_new" rel="create"/> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/_reload" rel="_reload"/> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/_acl" rel="_acl"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>localhost</title> <id>/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost" rel="list"/> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost/_reload" rel="_reload"/> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost" rel="edit"/> <content type="text/xml"> <s:dict> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">SplunkUniversalForwarder</s:key> <s:key name="can_list">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">0</s:key> <s:key name="owner">nobody</s:key> <s:key 
name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>admin</s:item> <s:item>power</s:item> <s:item>splunk-system-role</s:item> <s:item>user</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> <s:item>splunk-system-role</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">1</s:key> <s:key name="sharing">app</s:key> </s:dict> </s:key> <s:key name="hosts">localhost</s:key> <s:key name="index">default</s:key> <s:key name="logs"> <s:list> <s:item>Application</s:item> <s:item>ForwardedEvents</s:item> <s:item>Security</s:item> <s:item>Setup</s:item> <s:item>System</s:item> </s:list> </s:key> <s:key name="lookup_host">localhost</s:key> <s:key name="name">localhost</s:key> </s:dict> </content> </entry> </feed> 12:51:49 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-perfmon "name=CPU%20Load&interval=10&object=Processor&counters=%25%20Processor%20Time%3B%25%20User%20Time&instances=_Total" >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1" HTTP/1.1 400 Bad Request Date: Thu, 08 Sep 2022 16:51:51 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 X-Content-Type-Options: nosniff Content-Length: 199 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">Cannot create object id=CPU Load of input type=perfmon because it already exists.</msg> </messages> </response> 12:51:51 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-perfmon "name=Available%20Memory&interval=10&object=Memory&counters=Available%20Bytes" >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1" HTTP/1.1 400 
Bad Request Date: Thu, 08 Sep 2022 16:51:52 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 X-Content-Type-Options: nosniff Content-Length: 207 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">Cannot create object id=Available Memory of input type=perfmon because it already exists.</msg> </messages> </response> 12:51:52 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-perfmon "name=Free%20Disk%20Space&interval=3600&object=LogicalDisk&instances=_Total&counters=Free%20Megabytes%3B%25%20Free%20Space" >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1" HTTP/1.1 400 Bad Request Date: Thu, 08 Sep 2022 16:51:54 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 X-Content-Type-Options: nosniff Content-Length: 206 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">Cannot create object id=Free Disk Space of input type=perfmon because it already exists.</msg> </messages> </response> 12:51:54 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-perfmon "name=Network%20Interface&interval=10&object=Network%20Interface&counters=Bytes%20Received%2Fsec%3BBytes%20Sent%2Fsec&instances=*" >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1" HTTP/1.1 400 Bad Request Date: Thu, 08 Sep 2022 16:51:56 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 
X-Content-Type-Options: nosniff Content-Length: 208 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">Cannot create object id=Network Interface of input type=perfmon because it already exists.</msg> </messages> </response> 12:51:56 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/admin/deploymentclient/deployment-client targetUri=REDACTED:8089 >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1" HTTP/1.1 200 OK Date: Thu, 08 Sep 2022 16:51:56 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 X-Content-Type-Options: nosniff Content-Length: 1832 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <!--This is to override browser formatting; see server.conf[httpServer] to disable
I am creating a two-column column chart comparing how many necklaces we made (column 1) vs. how many we need (column 2). The chart is split up by hour, starting from @d-22h to now(). Yet, if no necklaces are created during an hour, the columns are not produced, which leaves a blank space. If there are no events in an hour, I want:
Necklaces made = the value from the last hour that had events (constant)
Goal = hour*60 (increasing by 60 every hour)
What I want (relative to the current chart): purple rectangles = 646 (constant); orange rectangles = previous Goal value + 60 (box1=540+60; box2=600+60).
Code:

|makeresults
|eval early_relative = "@d-2h"
|eval late = "@d+22h"
|eval date_hour=strftime(now(),"%H")
|eval timeofday=case((date_hour>=22 AND date_hour<=23),"@d+22h,now",(date_hour>=0 AND date_hour<22),"@d-2h,now")
|eval split=split(timeofday,",")
|eval early_relative=mvindex(split,0)
|eval early_date=strftime(relative_time(now(),early_relative),"%m/%d/%y %H:%M:%S")
|eval late = if(mvindex(split,1)="now",now(),relative_time(now(),mvindex(split,1)))
|eval late_date = strftime(if(mvindex(split,1)="now",now(),relative_time(now(),mvindex(split,1))),"%m/%d/%y %H:%M:%S")
|eval test = strftime(late,"%m/%d/%y %H:%M:%S")
|map search="search index=..... earliest=\"$early_relative$\" latest=$late$
    |eval hour=1
    |eval date_hour=strftime(now(),\"%H\")
    |eval timeofday=case((date_hour>=22 AND date_hour<=23),\"@d+22h,now\",(date_hour>=0 AND date_hour<22),\"@d-2h,now\")
    |eval late=$late_date$
    |eval early=$early_date$
    |bucket _time span=1h
    |eval Time=strftime(_time,\"%H\")
    |eval Goal_hour=case((Time=22),1,(Time=23),2,(Time>=0 AND Time<22),Time+3)
    |eval Goal=Goal_hour*60
    |stats count(Neckles) as Actual_Made by _time Goal
    |accum Actual_Made"

Please help! Thank you.