All Topics



Hello, I'm trying to get access to the free Splunk Cloud Trial. When I open the webpage, I get the message: OOPS! PAGE NOT FOUND. Please try again later. Is this a temporary problem, or does the Splunk Cloud Trial no longer exist?

Greetings, Toni
( | stats count by app ) I have 30 apps to be displayed in a pie chart. But in the visualization, only 14 of them are showing the label names of the app. Why are all 30 apps not getting displayed? Most of the ones that have fewer counts are the ones showing labels.
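Not from the original thread, but one common workaround: pie charts suppress labels on small slices, so you can keep only the largest N apps and roll the remainder into an "other" slice. A minimal sketch (the cutoff of 10 is arbitrary):

```
| stats count by app
| sort - count
| streamstats count as rank
| eval app=if(rank<=10, app, "other")
| stats sum(count) as count by app
```

This keeps the chart readable while still accounting for every event in the totals.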
Is there a way to get the last time a host touched a file, within a certain period, e.g. earliest=-24h? We got a request to do a "spot check" of 20 - 30 servers from a list of 720 servers which, according to someone's report run against Splunk, have not written to /var/log/audit/audit.log in 24 hours. I think it's boring to manually ssh to servers and collect ls -l output, so I thought it might be nice to ask Splunk which servers it has entries for in the audit.log file over the last 24h, then compare that list with the provided list in order to check how good their report is. I have to use the GUI.

First attempt (in fast mode), tested with earliest=-1m:

index=X OR index=Y earliest=-24h source=/var/log/audit/audit.log | table host | dedup host

As I watch the -24h query run, I thought I'd ask if there are some saner strategies to reduce the load. In *nix terms I would simply connect to a server and check the ctime of the file. The above query just looks for all events in a 24h period, then dedupes the list of servers. This seems a case of almost pure BFI. Good thing I am using index= and source=...
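One lighter-weight alternative worth trying: tstats answers "which hosts wrote to this source in the last 24h" from index-time metadata, without retrieving raw events. A sketch using the index names from the question:

```
| tstats latest(_time) as last_seen
    where (index=X OR index=Y) source=/var/log/audit/audit.log earliest=-24h
    by host
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
```

Because host and source are indexed fields, this avoids scanning event data and is typically far cheaper than the raw search plus dedup.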
Hi all, I am new to Splunk and am facing a scenario. We have installed a forwarder on one of our production Solaris devices, intending to send logs to Splunk for monitoring purposes. On Forwarder Management, we can see that this Solaris host is phoning home, which suggests it is connecting to Splunk.

App: the host is deployed to the Sol-prodweb app, which has an inputs.conf. Its contents list the paths that are supposed to be monitored, e.g.:

[monitor:///var/adm/authlog]
index=sol-prodweb
sourcetype=linux_secure
disable=0

We have also verified that the indexes page has the sol-prodweb detail and that the /var/adm/authlog path does have logs inside.

Server class: we have also created a sol-prodweb server class, with this device's IP in the whitelist.

Are there any important settings or steps that we missed, since we are still unable to see any logs or data from this particular host at all?
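Not a confirmed diagnosis, but two things worth checking: inputs.conf expects the key `disabled = 0` rather than `disable=0`, and since the forwarder is phoning home, its own internal logs should show any tailing or permission errors for that path. A sketch against the internal index (the host name is a placeholder):

```
index=_internal source=*splunkd.log* host=<your_solaris_host>
    (log_level=ERROR OR log_level=WARN)
| stats count by component
```

Components such as TailReader or WatchedFile appearing here usually point at file-access or path problems on the forwarder side.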
eval FunctionalRef=spath(_raw,"n2:EvtMsg.Bd.BOEvt.Evt.DatElGrp{2}.DatEl.Val") -> I am getting two (2) values: DHL5466256965140262WH3, DE4608089. Instead I should get only DHL5466256965140262WH3. This value is not static.

XML snippet:

<DatElGrp Cd="CommonGrp">
  <DatEl>
    <Cd>FunctionalRef</Cd>
    <Val>DHL5466256965140262WH3</Val>
  </DatEl>
  <DatEl>
    <Cd>DeclarantID</Cd>
    <Val>DE4608089</Val>
  </DatEl>
</DatElGrp>
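One way to pick the Val whose sibling Cd is FunctionalRef, rather than relying on position: extract both arrays and use mvfind to index into Val. A sketch, assuming the Cd and Val arrays stay aligned element-for-element:

```
| eval cds=spath(_raw, "n2:EvtMsg.Bd.BOEvt.Evt.DatElGrp{2}.DatEl.Cd")
| eval vals=spath(_raw, "n2:EvtMsg.Bd.BOEvt.Evt.DatElGrp{2}.DatEl.Val")
| eval FunctionalRef=mvindex(vals, mvfind(cds, "^FunctionalRef$"))
```

mvfind returns the index of the first Cd matching the regex, and mvindex pulls the Val at the same position, so the answer no longer depends on FunctionalRef being first in the group.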
I have a lookup with server details and OS details (see the tables below), and an index with CR No., Date, Server, and Status. With respect to a given CR No., the total number of servers that are patched is 4 (irrespective of whether the status is success or failure), and the rest of the servers in my lookup table are not patched with respect to that CR No. I want to write a query to get the count of the servers that are not patched with respect to that CR No., i.e. a count of 11. Please note: I need this query to show the count of servers that are not patched in a dashboard.

Lookup:

Server  OS
1       Unix
2       Win
3       Unix
4       Win
5       Unix
6       Win
7       Unix
8       Win
9       Unix
10      Win
11      Unix
12      Win
13      Unix
14      Win
15      Unix

Index:

CR No.  Date    Server  Status
1       1-Jan   1       Success
1       1-Jan   2       Success
1       1-Jan   3       Success
1       1-Jan   4       fail
2       25-Dec  5       Success
2       25-Dec  6       fail
2       25-Dec  7       fail
3       1-Nov   8       Success
3       1-Nov   9       Success
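A sketch of one way to count the lookup servers with no patch event for a given CR; the lookup file name, index name, and field names here are assumptions and need to match your environment:

```
| inputlookup server_lookup.csv
| search NOT [ search index=your_index CR_No=1 | dedup Server | fields Server ]
| stats count as not_patched
```

The subsearch returns the servers that do appear under that CR No., the NOT filter keeps only lookup rows absent from that list, and the final stats yields the not-patched count for the dashboard panel.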
Hi everyone, I have one requirement. I have a field TimeTaken that I calculate as BuildDuration/1000 from the logs. Suppose on Jan 7th the time taken for the builds is:

BuildStartDate    TimeTaken
Thu Jan 7 2021    00:08:20
Thu Jan 7 2021    00:05:13

I want the average of TimeTaken for Jan 7. Similarly, for Jan 8:

BuildStartDate    TimeTaken
Fri Jan 8 2021    00:11:50
Fri Jan 8 2021    00:16:10
Fri Jan 8 2021    00:10:59

I want the average of TimeTaken for Jan 8th. Likewise, if I select the last 7 days, I want the average TimeTaken for each day. Can someone guide me on how to write a query for average TimeTaken? Currently my query is:

index="abc" sourcetype="xyz" BuildName!="g*" (BuildResult ="*") | eval TimeTaken=round('BuildDuration'/1000) | fieldformat TimeTaken = tostring(TimeTaken, "duration") | rex mode=sed field=BuildStartDate "s/\d{2}:\d{2}:\d{2}\s[A-Z]{3}\s//g" | table ORG BuildResult BuildStartDate TimeTaken | where ORG="gc"

Thanks in advance
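A sketch that buckets events by day and averages the numeric duration before formatting it; it reuses the base search from the question and moves the ORG filter up front (averaging must happen on the numeric value, not on the "duration" string):

```
index="abc" sourcetype="xyz" BuildName!="g*" BuildResult="*" ORG="gc"
| eval TimeTaken=round('BuildDuration'/1000)
| bin _time span=1d
| stats avg(TimeTaken) as AvgTimeTaken by _time
| eval AvgTimeTaken=tostring(round(AvgTimeTaken), "duration")
```

With a "Last 7 days" time picker this yields one row per day with the average build time in HH:MM:SS form.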
Does this add-on, the Splunk Add-on for McAfee Web Gateway, support McAfee Cloud Proxy? https://splunkbase.splunk.com/app/3009/
I have a field named "test" which has the following JSON. If I do:

| fields test{}.data{}{}.metric, test{}.data{}{}.value
| table test{}.data{}{}.metric perfdata{}.data{}{}.value

I get (first item) a, 20 and b, 10 etc. in my table. How do I search to get (second item) p, 50 and q, 60 etc. in my table? Trying test{}[1] did not work for me. Thanks.

[
  {
    "data": [
      [
        { "metric": "a", "variables": { "Task": "x" }, "value": 20 },
        { "metric": "b", "variables": { "Task": "y" }, "value": 10 },
        { "metric": "c", "variables": { "Task": "z" }, "value": 745 }
      ]
    ]
  },
  {
    "data": [
      [
        { "metric": "p", "variables": { "Task": "e" }, "value": 50 },
        { "metric": "q", "variables": { "Task": "f" }, "value": 60 },
        { "metric": "r", "variables": { "Task": "g" }, "value": 70 }
      ]
    ]
  }
]
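One approach worth trying: spath path syntax addresses array elements with 1-based curly-brace indexes, so the second outer object can be selected explicitly instead of with `{}`. A sketch:

```
| spath input=test path="{2}.data{1}{}" output=items
| mvexpand items
| eval metric=spath(items, "metric"), value=spath(items, "value")
| table metric value
```

Here `{2}` picks the second element of the outer array, `data{1}{}` walks into the doubly-nested inner array, and mvexpand turns each metric object into its own row for the final table.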
Hi, I saw that you had this issue years ago: "I've installed Splunk Security Essentials App and Splunk TA for Windows. However, when I run the Data Source Check I get a notice that the src field must be defined in the Security logs. It says the TA for Windows should provide the field definition. I think this needs to be included in the inputs.conf file for the TA for Windows app. Any ideas how to resolve the message and/or what to add to the inputs.conf file?" I was wondering if you found any solution for that. @sbgoldberg13
Does anyone use standard Node libraries in their Splunk apps, for example 'util'? I'd like to use some of that functionality in an app, but I am a little confused about how to import that library within the context of Splunk and RequireJS.
Good afternoon. Is there documentation of the Splunk recommendations for blob storage? Recommended bucket stages (hot, warm, cold, frozen)? Any information is appreciated.
Hi everyone, I have one field called BuildStartDate. It is showing dates like:

Mon Jan 11 09:00:13 MST 2021
Sun Jan 10 09:00:01 MST 2021

I want to display BuildStartDate in the following format:

Mon Jan 11 2021
Sun Jan 10 2021

Can someone guide me on how to do that? Thanks in advance
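A sketch using strptime to parse the string and strftime to re-emit only the wanted parts; note that parsing the timezone abbreviation with %Z can be platform-dependent, so verify against your data:

```
| eval BuildStartDate=strftime(
      strptime(BuildStartDate, "%a %b %d %H:%M:%S %Z %Y"),
      "%a %b %d %Y")
```

If %Z fails to parse on your system, a rex that strips the time and zone (as in the sed-mode example elsewhere in these posts) is a reasonable fallback.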
Hi all, we are planning a Splunk upgrade from version 7.1.4 to 7.2.10. We have a multisite cluster environment with the following components:

1. 3 indexers - Site 1 (clustered with Site 2 indexers)
2. 1 LM - Site 1
3. 1 CM - Site 1
4. 2 SHs - Site 1 (clustered with Site 2 SH)
5. 1 deployer - Site 1
6. 1 SH (standalone) - Site 1
7. 3 indexers - Site 2
8. 1 SH (clustered with Site 1 SHs) - Site 2
9. 1 SH (standalone) - Site 2

We are planning to use the following sequence for the upgrade:

1. Upgrade the LM (Site 1)
2. Upgrade the CM
3. Upgrade the search heads (cluster)
4. Upgrade the deployer
5. Upgrade the standalone SHs
6. Put the CM in maintenance mode
7. Upgrade the indexers (peers) (Site 1, then Site 2)
8. Disable maintenance mode

Could you please advise whether this sequence is correct, or whether another sequence needs to be followed for the upgrade?
Our endpoint protection is blocking multiple PowerShell scripts that seem related to Splunk. Can anyone explain what these scripts do?

nt6-siteinfo.ps1
nt6-health.ps1
nt6-repl-stat.ps1

Thanks!
Hello, I am trying to rename fields for CIM compliance, and I see this pop up when trying to rename via delimiter. Any field I try, I get this warning, although it lets me save? Thanks
Hello, I am running a Splunk server on a Windows VM. A few weeks ago Splunk was ungracefully shut off (the Windows server was rebooted while Splunk was running). I was able to get Splunk up and running, but only on my local instance; now other computers on my network cannot access Splunk.

I have checked the firewall settings and everything is good. startwebserver = 1. After restarting Splunk, it assures me that port 8000 is open.

In conclusion, I am able to access Splunk from that one computer, but after Splunk was shut down ungracefully, other computers on my network can no longer access it. Any help would be much appreciated.

Thank you, Marco
Hello, I'm looking to get the triggered alert results, with the alert name and trigger time, in one table. Very simply:

Column 1: triggered alert name
Column 2: trigger time
Column 3: results of the triggered alert

Could anyone help me with this? Thanks in advance
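Triggered alerts are recorded in the audit index, so the first two columns are straightforward; a sketch:

```
index=_audit action=alert_fired
| eval trigger_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table ss_name trigger_time sid
```

Pulling the third column (the actual results) is heavier: each sid identifies the dispatched job, and the saved results have to be fetched per sid (e.g. via loadjob), which only works while the job artifacts are still retained.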
TL;DR: The goal is to perform an initial search which returns a table of times users authenticated, then for each row in the table perform a subsequent search to find each time they established a connection to a server. The authentication data and network data are 100% separate.

My initial search is:

index=authentication objectId="thingIcareabout" | eval earliest1=timestamp/1000 | eval earliestPlus10m=earliest1+600 | table username, earliest1, earliestPlus10m

This runs successfully and returns:

username  earliest1   earliestPlus10m
Joe       1610632992  1610630191
Bob       1610629591  1610633592

The reason I add earliestPlus10m is so I can run a subsequent search against the network index and limit the number of results to parse. If I try the map command:

index=authentication objectId="thingIcareabout" | eval earliest1=timestamp/1000 | eval earliestPlus10m=earliest1+600 | table username, earliest1, earliestPlus10m | map search="index=network connected $username$ earliest=$earliest1$ latest=$earliestPlus10m$ | stats earliest(_time)"

I get my 2 events, but no results in Statistics from map. The job inspector says the map returns no results. Yet if I literally copy the query from the inspector and run it in a new search, it returns exactly what I want. For instance:

index=network connected Joe earliest=1610632992 latest=1610632992 | stats earliest(_time)

does return correctly. Confused about what I may be doing wrong...

My ultimate goal is:

username  earliest1   subsearch(time)  calculated field (subsearchtime - earliest1)
Joe       1610632992  1610633001       9
Bob       1610629591  1610629598       7
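Not a certain diagnosis, but one frequent map gotcha fits these symptoms: when the inner query starts with an indexed search rather than a generating command, the string passed to search= must itself begin with the search keyword. A sketch that also carries the outer fields through so the final table can be built:

```
index=authentication objectId="thingIcareabout"
| eval earliest1=round(timestamp/1000), earliestPlus10m=earliest1+600
| map maxsearches=50 search="search index=network connected $username$
      earliest=$earliest1$ latest=$earliestPlus10m$
    | stats earliest(_time) as first_conn
    | eval username=\"$username$\", earliest1=$earliest1$, delta=first_conn-earliest1"
```

The eval inside the mapped search re-attaches username and earliest1 to each result row, since map only emits what the inner search returns.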
Hi, I am trying to break events based on the timestamp. The file contains multiple time formats.

Sample timestamps:

01 January 2021 10:21:66
2021年01月01日 金曜日 10:07:54 AM
2021年01月01日 金曜日 14:54:03
2021年01月01日 12:55:54 PM
2021年01月01日 13:54:54
2021年1月1日 20:59:04
2021年1月1日 9:23:32 AM
金曜日, 3 1月 2021 11:49:45 AM
Monday 3 January 2021 14:01:40
Monday, 3 January 2021 11:05:11 AM
Monday, January 3, 2021 10:04:44 AM
Thu Jan 7 22:33:44 EST 2021

Sample events:

07 January 2021 18:21:56 Employee1
Project Project Name Project Owner
------ ---------- -----------
A Y: \\Owner1\owner2
B Z: \\owner_1\owner 2
C g: \\owner11\owner12\owner 13

Friday, January 8, 2021 10:04:44 AM Employee2
Project Project Name Project Owner
------ ---------- -----------
A Y: \\Owner1\owner2
B Z: \\owner_1\owner 2
C g: \\owner11\owner12\owner 13

2021年01月08日 金曜日 10:07:54 AM Employee3
Project Project Name Project Owner
------ ---------- -----------
A Y: \\Owner1\owner2
B Z: \\owner_1\owner 2
C g: \\owner11\owner12\owner 13

I tried with datetime.xml but it didn't work.

Expected output: break events before each timestamp and show the results in this tabular format:

Employee1 A Y: \\Owner1\owner2
Employee1 B Z: \\owner_1\owner 2
Employee1 C g: \\owner11\owner12\owner 13
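A props.conf sketch for breaking events on any of the observed date styles; the sourcetype name is a placeholder, and the lookahead alternation covers only the formats shown above, so extend it if more appear:

```
[multi_locale_report]
SHOULD_LINEMERGE = false
# Break before: "01 January 2021", "2021年01月01日", "金曜日, 3 1月 2021",
# "Monday 3 January" / "Monday, January 3", or "Thu Jan 7 ..." styles.
LINE_BREAKER = ([\r\n]+)(?=\d{1,2}\s+\w+\s+\d{4}|\d{4}年|[月火水木金土日]曜日|(?:Mon|Tues|Wednes|Thurs|Fri|Satur|Sun)day|\w{3}\s\w{3}\s+\d)
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 60
```

Once events break correctly at each timestamp, the per-project rows can be extracted at search time (e.g. with rex max_match over the event body) to build the employee/project table.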