All Topics


Every time we reboot the server, the splunkd service does not start automatically, even though the startup type is Automatic. OS version is Windows Server 2019; UF agent version is 8.2.5.
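A frequently suggested workaround (not an official fix) is switching the service to delayed automatic start, so splunkd comes up after the services it depends on. The commands below assume the default Windows service name SplunkForwarder:

```bat
:: Show the current start type of the UF service (default name assumed)
sc qc SplunkForwarder

:: Switch to delayed automatic start (note the required space after "start=")
sc config SplunkForwarder start= delayed-auto
```

After changing this, reboot and check the System event log for service-start errors if splunkd still fails to come up.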
All, I have a simple bash script which pulls down some data from git and restarts a service. I'd like to give my SOC a button on a Splunk dashboard that they can just click. I can put the script anywhere, really. I remember seeing that I should be able to execute a script from splunk/bin from dashboards, but I'm having a hard time finding an example. Does someone have an example/recipe they can point me to?
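One pattern (a sketch under assumptions, not the only way) is to wrap the script in a custom search command, then make the dashboard button a link to a search that runs it. All names here (the app, the runsync command, the Python wrapper) are hypothetical:

```
# $SPLUNK_HOME/etc/apps/your_app/default/commands.conf (names hypothetical)
[runsync]
filename = runsync.py

# runsync.py lives in the app's bin/ directory and shells out to the bash
# script, returning a status row to the search.

# In the Simple XML dashboard, an <html> panel can expose the button as a
# link that opens a search running the command:
#   <a class="btn btn-primary" href="/app/your_app/search?q=%7C%20runsync">Run sync</a>
```

Since this effectively lets dashboard users execute a server-side script, it is worth restricting the command to a dedicated role.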
The ability to have a *nix UF run under a non-root user but still be able to read files was introduced with v9.0.0 of the UF (https://docs.splunk.com/Documentation/Forwarder/9.0.0/Forwarder/Installleastprivileged). Is there a way that I, as a Splunk admin, could see which (if any) POSIX capabilities (CAP_DAC_READ_SEARCH, and potentially also CAP_NET_ADMIN and CAP_NET_RAW) the various forwarders are running with? I've had a look at index=_internal to see if the UF generates anything during start-up, but I haven't found anything.
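I'm not aware of the UF logging its effective capabilities at startup either. One workaround is to check /proc on each forwarder host, for example via a scripted input whose output gets forwarded. The sketch below decodes a CapEff mask the way you would read it from /proc/&lt;splunkd-pid&gt;/status; the mask value here is a stand-in for illustration:

```shell
# Decode a CapEff hex mask (as reported by: grep CapEff /proc/<splunkd-pid>/status)
# and report whether the three capabilities of interest are present.
mask=0x0000000000003004   # stand-in value: bits 2, 12, and 13 set
for entry in "CAP_DAC_READ_SEARCH:2" "CAP_NET_ADMIN:12" "CAP_NET_RAW:13"; do
  name=${entry%:*}
  bit=${entry#*:}
  if [ $(( (mask >> bit) & 1 )) -eq 1 ]; then
    echo "$name: set"
  else
    echo "$name: not set"
  fi
done
```

Capability numbers (CAP_DAC_READ_SEARCH = 2, CAP_NET_ADMIN = 12, CAP_NET_RAW = 13) come from linux/capability.h.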
Indexer rebooted non-gracefully. After the reboot, Splunk starts generating crash files shortly after each restart. I spent the last two days running fsck repair on all buckets; it doesn't seem to have helped. There are no relevant errors in splunkd.log.

Crash log files:
crash-2022-09-07-14:45:07.log
crash-2022-09-07-14:45:15.log
crash-2022-09-07-14:45:24.log
crash-2022-09-07-14:45:32.log
crash-2022-09-07-14:45:40.log

Every crash log has the same pattern as below, with only the crashing thread changing (IndexerTPoolWorker-2, IndexerTPoolWorker-4, IndexerTPoolWorker-7 and the like):

[build 87344edfcdb4] 2022-09-07 13:31:01
Received fatal signal 6 (Aborted) on PID 193171.
Cause: Signal sent by PID 193171 running under UID 53292.
Crashing thread: IndexerTPoolWorker-2
Backtrace (PIC build):
[0x00007EFDA0A27387] gsignal + 55 (libc.so.6 + 0x36387)
[0x00007EFDA0A28A78] abort + 328 (libc.so.6 + 0x37A78)
[0x00007EFDA0A201A6] ? (libc.so.6 + 0x2F1A6)
[0x00007EFDA0A20252] ? (libc.so.6 + 0x2F252)
[0x000056097778BA2C] ReadableJournalSliceDirectory::findEventTimeRange(int*, int*, bool)
...
Libc abort message: splunkd: /opt/splunk/src/pipeline/indexer/JournalSlice.cpp:1780: bool ReadableJournalSliceDirectory::findEventTimeRange(st_time_t*, st_time_t*, bool): Assertion `tell() == pos' failed.
Hi All,

I have a lookup table table1.csv with the following fields: index, sourcetype, host, last_seen.

I have a custom index, idx1, which has the following fields: orig_index, orig_sourcetype, orig_host.

I need to search for each host value from the lookup table in the custom index, fetch its max(_time), and then store that value against the same host in last_seen. I tried the SPL below to build the search, but it is not fetching any results:

|inputlookup table1.csv
|eval index=lower(index)
|eval host=lower(host)
|eval sourcetype=lower(sourcetype)
|table index, host, sourcetype
|rename index AS orig_index, host AS orig_host, sourcetype AS orig_sourcetype
|format
|eval searchq=search
|eval searchq="index=\"idx1\"".searchq."|stats max(_time) AS last_seen BY orig_index, orig_sourcetype, orig_host"
|search searchq

However, when I used |fields searchq instead, it gave a proper SPL string as the result:

index="idx1" (orig_host="1.1.1.1" AND orig_index="xxx" AND orig_sourcetype="sourcetype1") OR (orig_host="1.1.1.2" AND orig_index="xxx" AND orig_sourcetype="sourcetype2"))|stats max(_time) AS last_time BY orig_index, orig_sourcetype, orig_host

And when I run that resulting SPL as a separate search, I get the proper results. Please share whether there is a way to correct the above approach, or whether some different approach could help build the solution. Thank you
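The reason the last line returns nothing is that | search searchq matches the literal field value; a search string built with eval cannot be executed mid-pipeline. A subsearch achieves the same thing, because it expands to exactly the OR-of-ANDs string the pipeline above already generates. A sketch using the field names from the post:

```
index="idx1"
    [| inputlookup table1.csv
     | eval orig_index=lower(index), orig_host=lower(host), orig_sourcetype=lower(sourcetype)
     | table orig_index, orig_host, orig_sourcetype ]
| stats max(_time) AS last_seen BY orig_index, orig_sourcetype, orig_host
```

Appending | outputlookup would then write last_seen back, though merging with the existing lookup rows may take an extra lookup or join step.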
I have a number of hosts sending logs "in the future". I've configured my indexer's props.conf to adjust the TZ for the select few problem children and restarted the indexer. How can I immediately verify my changes have put these hosts' new events in the correct TZ (meaning, no longer in the future)? Basically, the existing "future events" are making the timeline noisy and I can't see where (or perhaps *when*) new events are coming in. I could wait several hours for them to clear out, but that's not ideal.
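One way to check without waiting: look only at events indexed in the last few minutes and compare event time with index time. If the TZ fix took, newly indexed events from the problem hosts should no longer be stamped ahead of wall-clock time. Index and host names below are placeholders:

```
index=your_index host IN (badhost1, badhost2) _index_earliest=-15m
| eval skew_s = _time - _indextime
| stats count, max(skew_s) AS max_skew_s by host
```

A max_skew_s of several hours means that host's events are still landing in the future; a value near zero (or negative, i.e. normal indexing lag) means the fix worked.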
We are monitoring log files that rotate multiple times daily. We have wildcards specified in the monitor stanza, but not in the source setting. Our issue arises when the software that generates the logs is updated and we have to update the paths in our inputs.conf. Here is a sample stanza:

[monitor:///apps/psprdcs/pt8.58/config/sprdcs/appserv/SPRDCS/LOGS/APPSRV_*]
index=peoplesoft
sourcetype=peoplesoft_appsrv
source=/apps/psprdcs/pt8.58/config/sprdcs/appserv/SPRDCS/LOGS/APPSRV.LOG

What we would like to do is something along these lines, so version changes don't require Splunk changes:

[monitor:///apps/psprd*/.../APPSRV_*]
index=peoplesoft
sourcetype=peoplesoft_appsrv
source=/apps/psprd*/.../APPSRV.LOG

I have notes from my predecessor saying we can't do that because you cannot specify a wildcard on the source line, and we want the full pathname for the source. Is this limitation still in effect, or can it be done? Thanks in advance! Jeff
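To my knowledge the source key still takes a literal value, not a wildcard. But if the goal is simply a full, accurate pathname, one option is to omit the source override entirely: by default Splunk sets source to the actual path of each monitored file, which tracks version changes on its own. A sketch:

```
[monitor:///apps/psprd*/.../APPSRV_*]
index = peoplesoft
sourcetype = peoplesoft_appsrv
# no "source =" line: each event's source defaults to the real file path
```

The trade-off is that rotated files produce distinct source values rather than one normalized one, so any searches pinned to an exact source string would need a wildcard (source=*/APPSRV*) instead.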
I need to extract P302, P1, P2, etc. with a single regular expression. I built (?<Par>P[1-9][0-9]*), but when I run this in Splunk it only captures the first match (P302). Sample event:

[SearchBroker - XXX] - [submitSearch] INFO: XXX [] - submitSearch time=36 pTime={P302=11,P1=7,P301=13,P2=24,P3=23,P4=31,P5=25,P6=23,P300=13,P7=23,P8=24,P9=24,P10=21,P12=24,P11=23,P1000=1,P14=26,P13=24,P16=21,P15=20,P18=20,P17=23} pProcessTime={P302p=10,P1p=6,P301p=12,P2p=23,P3p=22,P4p=30,P5p=24,P6p=23,P300p=13,P7p=23,P8p=24,P9p=24,P10p=21,P12p=23,P11p=22,P1000p=0,P14p=26,P13p=23,P16p=20,P15p=20,P18p=20,P17p=23} pWaitTime=
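For inline extraction, rex stops at the first match unless max_match is raised; max_match=0 means unlimited and yields a multivalue Par field. A self-contained repro using a trimmed version of the event:

```
| makeresults
| eval _raw="submitSearch time=36 pTime={P302=11,P1=7,P301=13,P2=24}"
| rex max_match=0 "(?<Par>P[1-9][0-9]*)"
| table Par
```

For a props.conf-based extraction, the equivalent is a transforms.conf stanza with MV_ADD = true so repeated captures of the same field accumulate into a multivalue field.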
I've installed the forwarder on several other domain controllers in our environment, but these last 2 keep failing, throwing the all too enigmatic "setup ended prematurely" error. "Like a F-18 bro!"

They are Windows Server 2019, using the 9.0 forwarder 64-bit installer. Although the log states "SplunkForwarder already exists", there is no current installation of the forwarder (but I have attempted it several times). The logs (msiexec.log, splunk.log) don't seem to have any intel I find useful, but maybe you all have a better secret decoder ring? Other than a few details of the type "input type=perfmon because it already exists", I'm still unsure of the problem:

12:51:47 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/outputs/tcp/server "name=REDACTED:9997" >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1"
HTTP/1.1 400 Bad Request Date: Thu, 08 Sep 2022 16:51:47 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 X-Content-Type-Options: nosniff Content-Length: 170 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd
<?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">REDACTED:9997 forwarded-server already present</msg> </messages> </response>
12:51:47 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost lookup_host=localhost^&logs=Application^&logs=Security^&logs=System^&logs=ForwardedEvents^&logs=Setup >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1"
HTTP/1.1 200 OK Date: Thu, 08 Sep 2022 16:51:49 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 
X-Content-Type-Options: nosniff Content-Length: 4477 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <!--This is to override browser formatting; see server.conf[httpServer] to disable
.--> <?xml-stylesheet type="text/xml" href="/static/atom.xsl"?> <feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>win-event-log-collections</title> <id>/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections</id> <updated>2022-09-08T12:51:49-04:00</updated> <generator build="6818ac46f2ec" version="9.0.0"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/_new" rel="create"/> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/_reload" rel="_reload"/> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/_acl" rel="_acl"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>localhost</title> <id>/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost" rel="list"/> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost/_reload" rel="_reload"/> <link href="/servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-event-log-collections/localhost" rel="edit"/> <content type="text/xml"> <s:dict> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">SplunkUniversalForwarder</s:key> <s:key name="can_list">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">0</s:key> <s:key name="owner">nobody</s:key> <s:key 
name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>admin</s:item> <s:item>power</s:item> <s:item>splunk-system-role</s:item> <s:item>user</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> <s:item>splunk-system-role</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">1</s:key> <s:key name="sharing">app</s:key> </s:dict> </s:key> <s:key name="hosts">localhost</s:key> <s:key name="index">default</s:key> <s:key name="logs"> <s:list> <s:item>Application</s:item> <s:item>ForwardedEvents</s:item> <s:item>Security</s:item> <s:item>Setup</s:item> <s:item>System</s:item> </s:list> </s:key> <s:key name="lookup_host">localhost</s:key> <s:key name="name">localhost</s:key> </s:dict> </content> </entry> </feed> 12:51:49 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-perfmon "name=CPU%20Load&interval=10&object=Processor&counters=%25%20Processor%20Time%3B%25%20User%20Time&instances=_Total" >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1" HTTP/1.1 400 Bad Request Date: Thu, 08 Sep 2022 16:51:51 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 X-Content-Type-Options: nosniff Content-Length: 199 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">Cannot create object id=CPU Load of input type=perfmon because it already exists.</msg> </messages> </response> 12:51:51 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-perfmon "name=Available%20Memory&interval=10&object=Memory&counters=Available%20Bytes" >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1" HTTP/1.1 400 
Bad Request Date: Thu, 08 Sep 2022 16:51:52 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 X-Content-Type-Options: nosniff Content-Length: 207 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">Cannot create object id=Available Memory of input type=perfmon because it already exists.</msg> </messages> </response> 12:51:52 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-perfmon "name=Free%20Disk%20Space&interval=3600&object=LogicalDisk&instances=_Total&counters=Free%20Megabytes%3B%25%20Free%20Space" >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1" HTTP/1.1 400 Bad Request Date: Thu, 08 Sep 2022 16:51:54 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 X-Content-Type-Options: nosniff Content-Length: 206 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">Cannot create object id=Free Disk Space of input type=perfmon because it already exists.</msg> </messages> </response> 12:51:54 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/data/inputs/win-perfmon "name=Network%20Interface&interval=10&object=Network%20Interface&counters=Bytes%20Received%2Fsec%3BBytes%20Sent%2Fsec&instances=*" >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1" HTTP/1.1 400 Bad Request Date: Thu, 08 Sep 2022 16:51:56 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 
X-Content-Type-Options: nosniff Content-Length: 208 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">Cannot create object id=Network Interface of input type=perfmon because it already exists.</msg> </messages> </response> 12:51:56 PM C:\Windows\system32\cmd.exe /c ""C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" cmd splunkd rest --noauth POST /servicesNS/nobody/SplunkUniversalForwarder/admin/deploymentclient/deployment-client targetUri=REDACTED:8089 >> "C:\Users\control\AppData\Local\Temp\splunk.log" 2>&1" HTTP/1.1 200 OK Date: Thu, 08 Sep 2022 16:51:56 GMT Expires: Thu, 26 Oct 1978 00:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Content-Type: text/xml; charset=UTF-8 X-Content-Type-Options: nosniff Content-Length: 1832 Connection: Close X-Frame-Options: SAMEORIGIN Server: Splunkd <?xml version="1.0" encoding="UTF-8"?> <!--This is to override browser formatting; see server.conf[httpServer] to disable
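Given the "SplunkForwarder already exists" and "already present" lines, a leftover service registration from an earlier attempt is a plausible culprit (an assumption, not a diagnosis). One cleanup sequence to try before re-running the installer, with verbose MSI logging; the .msi and log filenames are placeholders:

```bat
:: Check for and remove a stale service registration from a failed install
sc query SplunkForwarder
sc stop SplunkForwarder
sc delete SplunkForwarder

:: Re-run the installer with verbose logging for a better error trail
msiexec /i splunkforwarder-9.0.0-x64-release.msi /l*v uf_install.log
```

Also worth checking whether a leftover C:\Program Files\SplunkUniversalForwarder directory from a prior attempt is confusing the installer.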
I am creating a two-column column chart comparing how many necklaces we made (column 1) vs. how many we need (column 2). The chart is split up by hour, starting from @d-22h to now(). Yet, if no necklaces are created during an hour, the columns will not be produced and will leave a blank space.

If there are no events in an hour:
Necklaces made = constant value of necklaces made during the last event hour (constant)
Goal = hour*60 (increases by 60 every hour)

current chart: (screenshot)
What I want: Purple rectangles = 646 (constant); Orange rectangles = previous Goal value + 60 (box1=540+60; box2=600+60)

Code:
------------------------------------------------------------------------
|makeresults|eval early_relative = "@d-2h"|eval late = "@d+22h"
|eval date_hour=strftime(now(),"%H")
|eval timeofday=case((date_hour>=22 AND date_hour<=23),"@d+22h,now",(date_hour>=0 AND date_hour<22),"@d-2h,now")
|eval split=split(timeofday,",")
|eval early_relative=mvindex(split,0)
|eval early_date=strftime(relative_time(now(),early_relative),"%m/%d/%y %H:%M:%S")
|eval late = if(mvindex(split,1)="now",now(),relative_time(now(),mvindex(split,1)))
|eval late_date = strftime(if(mvindex(split,1)="now",now(),relative_time(now(),mvindex(split,1))),"%m/%d/%y %H:%M:%S")
|eval test = strftime(late,"%m/%d/%y %H:%M:%S")
|map search="search index=..... earliest=\"$early_relative$\" latest=$late$ |eval hour=1|eval date_hour=strftime(now(),\"%H\") |eval timeofday=case((date_hour>=22 AND date_hour<=23),\"@d+22h,now\",(date_hour>=0 AND date_hour<22),\"@d-2h,now\") |eval late=$late_date$ |eval early=$early_date$ |bucket _time span=1h |eval Time=strftime(_time,\"%H\") |eval Goal_hour=case((Time=22),1,(Time=23),2,(Time>=0 AND Time<22),Time+3) |eval Goal=Goal_hour*60 |stats count(Neckles) as Actual_Made by _time Goal |accum Actual_Made"
-----------------------------------------------------------------------
Please help! Thank you.
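A map-free sketch that keeps a column for every hour: timechart emits a row per hour even when there are no events, accum keeps the made count flat through empty hours, and streamstats numbers the hours for the Goal line. Index and field names (including Neckles) are taken from the post and may need adjusting:

```
index=your_necklace_index earliest=@d-2h latest=now
| timechart span=1h count(Neckles) as made
| accum made as Actual_Made
| streamstats count as Goal_hour
| eval Goal=Goal_hour*60
| fields _time, Actual_Made, Goal
```

Hours with no events get made=0 from timechart, so Actual_Made stays constant through them while Goal keeps climbing by 60, which matches the desired behavior described above.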
In HTML, how do I get the text to be on the right side of the button? (The white text.) At the moment I use the label of a table to add my message, but ideally I would like it to be on the right side of the button. I have the following:

<html rejects="$Config_path_token$">
<style>.btn-primary { margin: 5px 10px 5px 0; }</style>
<a class="btn btn-primarys">Press to Download Congifuration File</a>
</html>
<table id="tbl1">
<title>Configuration File http://dell967srv.scz.murex.com:15022/public/mxres/common/launchermxmarketdataevaluation.mxres</title>
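One way (a sketch, not the only approach) is to put the button and the message in a flex row so the text sits directly to the right of the button; the message text here is a placeholder. Note also that the original stylesheet targets .btn-primary while the anchor uses class btn-primarys — the trailing "s" keeps the margin rule from applying:

```html
<html rejects="$Config_path_token$">
  <style>
    .btn-row { display: flex; align-items: center; gap: 10px; }
    .btn-primary { margin: 5px 0; }
  </style>
  <div class="btn-row">
    <a class="btn btn-primary">Press to Download Configuration File</a>
    <span style="color: white;">Your message text here</span>
  </div>
</html>
```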
Hello, I'm a bit new to Splunk and I'm trying to run a query that shows me users in Active Directory that are still enabled but haven't logged in for the past 30 days. I've tried searching through various posts, but none seem to be exactly what I'm looking for. I may have overlooked it, so if someone can point me in the right direction or provide a sample query to get me started, I'd be very grateful. Thanks, Bob
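If the Splunk Supporting Add-on for Active Directory (SA-ldapsearch) is installed, something like the sketch below is a common starting point. The bitwise userAccountControl filter (rule OID 1.2.840.113556.1.4.803, bit 2 = ACCOUNTDISABLE) excludes disabled accounts; the timestamp parsing is an assumption and must be adjusted to however the add-on renders lastLogonTimestamp:

```
| ldapsearch search="(&(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))" attrs="sAMAccountName,lastLogonTimestamp"
| eval last_logon = strptime(lastLogonTimestamp, "%Y-%m-%dT%H:%M:%S%z")
| where isnotnull(last_logon) AND last_logon < relative_time(now(), "-30d")
| table sAMAccountName, lastLogonTimestamp
```

Without the add-on, an alternative is correlating Windows logon events (e.g. EventCode 4624) against a user list, but that only sees logons on hosts that forward their Security log.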
My current search is:

`index` | search source="Main Source" | fields identifier, status_label | chart count over identifier by status_label

My output statistics for this search look like this:

Identifier | F1 | F2 | F3 | F4 | F5
ID_1       | 6  | 4  | 3  | 2  | 0
ID_2       | 0  | 3  | 7  | 9  | 4

I need to combine F1, F3, and F4 as Total_1, and F2 + F5 as Total_2, for each identifier. I only want my table to show Identifier, Total_1, and Total_2. Is this possible?
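Since chart materializes F1–F5 as real fields, an eval after it can fold them; coalesce guards against a column being absent for some identifier (this assumes the status_label values are literally F1–F5):

```
`index` | search source="Main Source"
| chart count over identifier by status_label
| eval Total_1 = coalesce(F1,0) + coalesce(F3,0) + coalesce(F4,0)
| eval Total_2 = coalesce(F2,0) + coalesce(F5,0)
| table identifier, Total_1, Total_2
```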
Hi team, I'm using a SaaS controller. I have many duplicate users in my account management module, and I would like to find a way to detect their last status and their activity logs. I want to delete the duplicate users from my account management module. Are there any solutions for this? I need this ASAP. Thank you in advance; I am very grateful for your help. Comment title edited for clarity and searchability. Claudia Landivar, Community Manager
Hello guys. I use Splunk Cloud to monitor logs from Windows, firewalls, Office 365, etc. I recently got a message that Splunk's license had expired, and since then Splunk has stopped receiving firewall logs. I updated the license and renewed the credentials, but I still don't receive the logs, and this error message appears: "The TCP output processor has paused the data flow." Could someone help me, please?
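The paused-data-flow message usually comes with more context in the _internal logs of the instance doing the forwarding (the heavy forwarder or whichever box shows the banner). If those are still searchable, a starting point (a sketch, search on that instance or scope it with host=) is:

```
index=_internal sourcetype=splunkd component=TcpOutputProc (log_level=WARN OR log_level=ERROR)
| stats count, latest(_raw) AS latest_message by host
```

Common causes surfaced this way include the downstream receiver refusing connections and blocked output queues, which narrows down whether the problem is credentials, certificates, or connectivity.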
I need to create a Splunk alert that will trigger when storage on /vi/vip_pdh/00d for a host reaches at least 90% capacity.

index=A sourcetype=B /vi/vip_pdh OR /var/log earliest=-2h | eval UsePct=rtrim(UsePct,"%") | stats latest(UsePct) as UsePct by MountedOn host

Just a slight correction: I want to monitor both /vi/vip_pdh and /var/log. Thanks!
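Building on the search in the post, constraining MountedOn to the two filesystems, converting UsePct to a number, and adding a threshold makes it alert-ready (set the alert to trigger on "number of results > 0"):

```
index=A sourcetype=B (MountedOn="/vi/vip_pdh" OR MountedOn="/var/log") earliest=-2h
| eval UsePct = tonumber(rtrim(UsePct, "%"))
| stats latest(UsePct) AS UsePct by MountedOn, host
| where UsePct >= 90
```

If the mount actually reports as /vi/vip_pdh/00d rather than /vi/vip_pdh, a wildcard match (MountedOn="/vi/vip_pdh*") may be needed instead.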
I'm extremely new to Splunk and finding learning SPL very frustrating. I'm trying to look for Windows logon events / attempted logons by leavers' accounts after their last working day. How do I express "where a specific field (the last working day) is before today's date"? The last working day field, which I'm pulling from a separate index, is in the following format: "2020-02-28 00:00:00.0"
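The usual pattern is to do the comparison in epoch time: parse the string with strptime, then filter with where. A sketch assuming the last-working-day values end up in a lookup keyed by user (the lookup name, field names, and event codes below are hypothetical and must be adapted):

```
index=wineventlog (EventCode=4624 OR EventCode=4625)
| eval user = lower(Account_Name)
| lookup leavers.csv user OUTPUT last_working_day
| eval lwd = strptime(last_working_day, "%Y-%m-%d %H:%M:%S.%1N")
| where isnotnull(lwd) AND _time > lwd
```

For a "before today" comparison specifically, relative_time(now(), "@d") gives midnight today as an epoch, so `where lwd < relative_time(now(), "@d")` expresses "last working day is before today's date".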
Hello all,

My deployment is UF ----> HF (local copy) ----> indexer.

I would like to send logs from the HF to the indexer except for some sourcetypes, while at the same time keeping a local copy on the HF of all logs forwarded from the UF. I have found a number of seemingly great answers and help pages on how to set this up with props.conf and transforms.conf, but no luck. At what level do I need to change the configuration, the HF or the indexer? Please suggest how to achieve this. Thanks,
Hello colleagues. When running the command /opt/splunk/bin/splunk reload deploy-server -class Class_Name -debug, I get the output below. There was no such error before; it appeared just today. What could it be connected with?

Will setenv SPLUNK_CLI_DEBUG to "v".
In check_and_set_splunk_os_user(): In env found *no* SPLUNK_OS_USER var.
WARNING (cli_common) btool returned something in stderr: 'Will exec (detach=no): USER=root USERNAME=root PATH=/opt/splunk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/splunk/bin PWD=/opt/splunk/etc/deployment-apps HOSTNAME=splunk-deployer SPLUNK_HOME=/opt/splunk SPLUNK_DB=/opt/splunkDBcold/DBdefault SPLUNK_SERVER_NAME=Splunkd SPLUNK_WEB_NAME=splunkweb PYTHONPATH=/opt/splunk/lib/python2.7/site-packages NODE_PATH=/opt/splunk/lib/node_modules LD_LIBRARY_PATH=/opt/splunk/lib LDAPCONF=/opt/splunk/etc/openldap/ldap.conf /opt/splunk/bin/splunkd btool web list
I have encountered an issue with the foreach command on mv-fields. When I execute my search, Splunk says: "Error in 'eval' command: The expression is malformed. An unexpected character is reached at '<<ITEM>>'."

SPL to reproduce:

| makeresults
| eval mvfield=mvappend("1", "2", "3"), total=0
| foreach mode=multivalue mvfield [eval total = total + <<ITEM>>]
| table mvfield, total

Note: this query is pulled directly from the examples for the foreach command.
Note 2: the argument "mode" is not syntax-highlighted (I would expect green).
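Worth checking the Splunk version first: foreach mode=multivalue was added in Splunk 9.0. On an older instance the mode argument isn't recognized (which would also explain the missing syntax highlighting), so <<ITEM>> is never substituted and eval sees the literal token, producing exactly this error. On pre-9.0 versions the same sum can be computed by expanding the field:

```
| makeresults
| eval mvfield=mvappend("1", "2", "3")
| mvexpand mvfield
| stats sum(mvfield) AS total, values(mvfield) AS mvfield
```

This should yield total=6 with mvfield restored as a multivalue field.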