pkeller's Topics

The instructions for configuring data inputs for the TA-Azure imply that there should be additional items under Settings -> Data Inputs, but we're not seeing them. We've installed and enabled the TA, yet we can't proceed with the highlighted step (below) from the documentation because neither the "Azure Diagnostics" nor the "Azure Website Diagnostics" input appears in the standard Data Inputs panel.

[ snipped from Azure setup document ]

Setting up Splunk to read Azure diagnostic logs
1. Within Splunk, click Settings -> Data inputs
2. Click the "Azure Diagnostics" input or "Azure Website Diagnostics" input
3. Click on the "New" button to create a new data input
4. Give the input a unique name
5. Supply the name of the Azure Storage account containing the log data
6. Supply the Azure Storage account access key - refer to the section below for details on how to obtain your storage account access key

Is there some other setup item that needs to be performed in order to complete the Data Inputs portion for the Azure TA?

I've been asked to index both Operational.evtx and Analytic.etl from both \Winevt\Logs\Microsoft-Windows-WinRM and \Winevt\Logs\Microsoft-Windows-PowerShell on a few Windows hosts. I'm not quite sure how to configure inputs.conf for this. I'm guessing it's something like:

[WinEventLog:PowerShell]
disabled = 1
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
renderXml = false

or [WinEventLog:WinRM] ...

But again, it's not really clear to me (and I'm not at all Windows literate). And then how do you differentiate between the Operational and Analytic objects? Thank you

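To be concrete, this is the direction I'm guessing at -- an untested sketch in which the channel names are my own assumption based on the folder names above (I'd confirm the exact names in Event Viewer first), and where the Analytic (.etl) channels presumably have to be enabled in Event Viewer (View -> Show Analytic and Debug Logs, then Enable Log) before anything is emitted:

# inputs.conf on the Windows UF -- channel names are assumed, verify in Event Viewer
[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = 0
start_from = oldest
current_only = 0
renderXml = false

[WinEventLog://Microsoft-Windows-PowerShell/Analytic]
disabled = 0

[WinEventLog://Microsoft-Windows-WinRM/Operational]
disabled = 0

[WinEventLog://Microsoft-Windows-WinRM/Analytic]
disabled = 0

In other words, each channel would get its own WinEventLog:// stanza addressed by the full channel name rather than by the .evtx/.etl file path, which is how Operational and Analytic end up differentiated.
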
In our test environment, we successfully set up the Splunk Add-on for Amazon S3 and pulled buckets so that we could view the data and make sure the props.conf settings were sorted out before we moved to production. When we set up the same configuration in production, we're only pulling 'new' buckets. We'd like to ingest all the same buckets that were pulled into our test environment. Is there some setting in the add-on (or on the S3 side) that keeps track of what has already been pulled, thus preventing a duplicate pull? Thanks very much,

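For reference, the kind of thing I suspect I'm looking for -- treat every name here as an assumption on my part rather than something verified against the add-on's docs: the S3 modular input keeps its own checkpoint state on the instance running the input (I believe under $SPLUNK_HOME/var/lib/splunk/modinputs/), and the input stanza has an initial-scan setting that controls how far back the first scan reaches, something along the lines of:

# inputs.conf -- stanza and setting names are my guesses, check the add-on's spec file
[aws_s3://pull_existing_objects]
aws_account = my_aws_account
bucket_name = my-bucket
initial_scan_datetime = 1970-01-01T00:00:00Z

Since the production instance has its own (empty) checkpoint state, it shouldn't be the test environment's history blocking the pull -- more likely the initial scan time is defaulting to "now".
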
Been trying to configure the getsnow app. A couple of questions:

1. Is it supported on a search head cluster?
2. When I create a [production] stanza including 'user' and 'password' settings, is the clear-text password supposed to get encrypted?
3. getsnow.log gets created but there is never any data written to it, and |getsnow table="" doesn't seem to return any data at all.
4. The 'getsnow' search also returns a number of errors from our indexer pool, which seems odd. Is getsnow required to be installed on the indexing layer as well?

Thank you

We find that in many cases the Forwarder Management interface is very slow. Some folks prefer to modify serverclass.conf manually, others prefer the UI. Does this present a problem, as long as we don't step on each other's toes?

Also, it looks like Forwarder Management uses apps/search/local/serverclass.conf, while the people who have been weaned on manually managing the serverclass.conf file use etc/system/local/serverclass.conf ... Is there a way to force the Forwarder Management UI to use etc/system/local instead of using the one in the 'search' app?

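For anyone weighing the same choice, a note on what I understand the merge behavior to be (general Splunk precedence rules, not anything specific to Forwarder Management): both files are merged at runtime, etc/system/local wins over any app's local directory, and you can see exactly which file is supplying each setting with:

$SPLUNK_HOME/bin/splunk btool serverclass list --debug

Each line of the btool output is prefixed with the path of the .conf file it came from, which at least makes it obvious when a UI edit and a hand edit are fighting over the same stanza.
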
I've constructed a lookup table containing some key data sources that I expect to see events from on a daily basis. The lookup table, expected_datasources.csv, looks like this:

sourcetype,source
st1,source1
st2,source2a;source2b;source2c
st3,/path/to/source3*

The 'globbed' source is there because the source name will change every day as the filename contains date/time information. The search I use to confirm that I've received events within the last 24 hours:

(index=firstIndex) OR (index=secondIndex) earliest=-24h [|inputlookup expected_datasources.csv | makemv delim=";" source | fields source,sourcetype ]
| append [ inputlookup expected_datasources.csv | makemv delim=";" source | fields source,sourcetype ]
| stats count by sourcetype source
| eval count=count-1
| eval count=count-1
| where count<1 AND (!source LIKE "%*")

The problem with this search is that it works fine for all of the lookup rows EXCEPT the one where source=/path/to/source3*

If the search returns events for a source named /path/to/source3_Mar02-2016-03:00.csv then everything works correctly. The obvious flaw here is that if nothing gets returned matching the pattern, then I've already provided the condition to ignore that data source with my (!source LIKE "%*") ... I'm kind of stuck on the logic here of actually how to complete this thing. As always, thanks for any suggestions.

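One direction I've been sketching to sidestep the wildcard row entirely -- this assumes each expected feed can be identified by sourcetype alone, which may or may not hold (untested):

| tstats count where (index=firstIndex OR index=secondIndex) earliest=-24h by sourcetype
| append [| inputlookup expected_datasources.csv | fields sourcetype | eval count=0 ]
| stats sum(count) as count by sourcetype
| where count=0

Anything left over is an expected sourcetype with no events in the window, and the per-source / wildcard matching problem never comes up.
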
Recently started encountering issues where one node of a 4-node search head cluster starts reporting:

SHPMaster - Search not executed: Your maximum number of concurrent searches has been reached. usage=93 quota=40 user=my.username

The strange thing is that this is only happening on one node. The Activity -> Jobs drop-down doesn't reveal anywhere close to that number of running jobs, and bouncing splunkd on the reporting member resolves the issue ... (and it gradually reappears over the next 5-7 days). This environment was recently upgraded to 6.3.2. I guess the question then would be, why is this happening only on a single node?

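For my own debugging notes, a rough way to compare what the member itself thinks is live against what the Jobs page shows -- a standard REST endpoint, though I'm treating the field names as something to verify:

| rest /services/search/jobs splunk_server=local
| search isDone=0
| stats count as live_jobs

Run on the complaining member, that count (or simply the number of directories piling up in $SPLUNK_HOME/var/run/splunk/dispatch) should show whether jobs are genuinely being left behind or whether the quota accounting itself has drifted.
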
I have two data sources, each with a field named foo. Each data source has a different sourcetype, so I'd like to do something like this:

(sourcetype=STa) OR (sourcetype=STb) | ...

If sourcetype is STa, I'd like to rename foo to fooA.
If sourcetype is STb, I'd like to rename foo to fooB.

Using eval fooA = coalesce(foo,NULL) works, but I'm not sure how to make it conditional so that it only acts on events with the STa sourcetype. Thank you

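To be concrete, this is the kind of thing I'm after (untested sketch, names taken from above):

(sourcetype=STa) OR (sourcetype=STb)
| eval fooA=if(sourcetype=="STa", foo, null())
| eval fooB=if(sourcetype=="STb", foo, null())

If more sourcetypes get added later, case() instead of if() would presumably keep it to a single eval.
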
If I'm monitoring a very large logfile:

[monitor:///home/me/logs]
whitelist = (myApp)\.log$

/home/me/logs/myApp.log

And at some point, a process rotates the file to /home/me/logs/OLD/myApp.log. If the file hasn't fully been forwarded at the time of rotation, will:

1. myApp.log be monitored in the new directory (assumed, because OLD would be in scope for the monitored path)?
2. myApp.log be monitored in its entirety, or will Splunk still know the offset that was last indexed?

Thank you.

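In case it changes the answer, the fallback I'm considering is keeping the rotated copy out of scope entirely -- my understanding (worth verifying) is that Splunk recognizes the renamed file by the CRC of its initial content and resumes from the saved offset, but excluding it avoids the question altogether:

[monitor:///home/me/logs]
whitelist = (myApp)\.log$
blacklist = /OLD/
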
I'm finding that timechart is returning null results if my number is less than 1.

earliest=-3d latest=-1d sourcetype=foo | timechart span=1h avg(value) as myValue by host

If the overall average of value is less than one (i.e. .2 or .7, etc.), I get a null result in myValue. This works fine for numbers 1 or greater. I've tried using round, exact, and eval(avg(value)*10) ... I still get null results.

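One guess I want to rule out, noted here as an untested sketch: if value is sometimes extracted as a string (e.g. ".2" with no leading zero), the aggregation may be dropping it, so coercing it first might be worth a try:

earliest=-3d latest=-1d sourcetype=foo
| eval value=tonumber(value)
| timechart span=1h avg(value) as myValue by host
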
host,value,timestamp
a1,30,24-Oct-15 00:00
a1,10,24-Oct-15 01:00
a1,5,24-Oct-15 02:00
a2,3,24-Oct-15 00:00
a2,5,24-Oct-15 01:00

I'm wondering if it's possible, using either inputcsv or inputlookup (if the csv is a lookup table), to do something like:

|inputcsv mycsv | search host=a1 | timechart span=1h avg(value) by host

(Obviously, that doesn't work ... and I'm thinking that the only way to do this is to index the CSV with the TIME_FORMAT defined based on the 'timestamp' field.) I probably shouldn't even be asking this to begin with.

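What I'm going to try first, as a sketch (assuming the file is a lookup named mycsv.csv and the timestamp really is in the format shown above): build _time from the timestamp field and let timechart bin on that, no indexing required.

| inputlookup mycsv.csv
| search host=a1
| eval _time=strptime(timestamp, "%d-%b-%y %H:%M")
| timechart span=1h avg(value) by host
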
I recently had to pull a dashboard's raw XML file off of an archive. What is the process for actually putting it back in?

I copied the file to the original directory ($SPLUNK_HOME/etc/users/username/search/local/data/ui/views) and then executed a debug/refresh, which did make the dashboard visible to the UI on that cluster member, but it didn't result in the dashboard being replicated to the other cluster members. Then I used the UI to edit the dashboard, made a trivial change, and at that point it was pushed.

I'm really looking for the best practice to actually restore something in an SHC, be it a dashboard or a lookup table.

I've created a script that, when called from the search bar using:

|script foo.py | outputtext

outputs a table containing one unnamed column with the script output and an empty _raw column. If the output is "key=value", is there a downstream command I can use that will actually create that field and value?

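For the record, the sort of thing I've been experimenting with -- untested, and "output_column" below is just my placeholder for whatever the unnamed column is actually called (| fieldsummary or | table * should show the real name):

|script foo.py | outputtext
| rex field=output_column "(?<key>[^=]+)=(?<value>.+)"

A kv-style alternative would be to copy that column into _raw with an eval and then run | extract, but either way the trick seems to be finding the real name of that column first.
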
Say I have a table ...

host, IP, destinationHostname, Port, count
host1 10.10.10.1 desthost1 9999, 33
host1 10.10.10.2 desthost2 9998, 33
host1 10.10.10.3 desthost3 9997, 34
host2 10.10.10.1 desthost1 9999, 88
host3 10.10.10.2 desthost2 9995, 100
host3 10.10.10.4 desthost4 9990, 10

I'd like to remove the host field where it appears multiple times, so I want my output to look like this (essentially, where host1 repeats I just want it gone):

host1 10.10.10.1 desthost1 9999, 33
      10.10.10.2 desthost2 9998, 33
      10.10.10.3 desthost3 9997, 34
host2 10.10.10.1 desthost1 9999, 88
host3 10.10.10.2 desthost2 9995, 100
      10.10.10.4 desthost4 9990, 10

Is that even possible? The last element of my search is simply:

|table host, IP, destinationHostname, Port, count

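The direction I'm leaning, as an untested sketch (it assumes the results are already sorted by host, which the sample is): remember the previous row's host with streamstats and blank it when it repeats.

... | table host, IP, destinationHostname, Port, count
| streamstats current=f window=1 last(host) as prev_host
| eval host=if(host==prev_host, "", host)
| fields - prev_host
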
I installed the Splunk App for Unix and Linux 5.0.2 on my search head cluster, installed the SA-nix app on the search heads and indexers, and deployed the Splunk Add-on for Unix and Linux everywhere.

Now, when I go into setup to add categories/groups, I create a category, then add a group to it ... and immediately the app starts spinning on 'loading' (in the "Hosts not in" area). It never ends. I assumed that was related to the dynamically created dropdowns.csv, but I'm not positive. On the indexers, dropdowns.csv does get created when Splunk is restarted, but it doesn't really represent every host that would be sending data to my indexing pool. On the search heads, I tried manually generating my own dropdowns.csv that prepopulated categories and groups with hosts, but ultimately the search heads started complaining like mad about stuff not being on the indexers. I have since removed SA-nix from everywhere.

My question would be: does this sound familiar, or is there more detailed documentation about the lookups that the app depends on and how to ensure that they're all available?

inputs.conf:

[monitor:///home/foo/logs/*/app]
whitelist = \.gmt.log$
blacklist = monitor
disabled = false

Underneath /home/foo/logs/base/app there are files named foo.YYYYMMDD.gmt.log. There's also a subdirectory named 'monitor', and the app rotates files from the 'app' directory to the 'monitor' directory. But when I run 'splunk list monitor' I see:

/home/foo/logs/base/app
/home/foo/logs/base/app/foo.20150716.gmt.log
/home/foo/logs/base/app/foo.20150718.gmt.log
/home/foo/logs/base/app/monitor
/home/foo/logs/base/app/monitor/foo.20150710.gmt.log
/home/foo/logs/base/app/monitor/foo.20150711.gmt.log

Shouldn't those bottom three lines be omitted from the output?

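One variation I plan to test, offered as a guess rather than a confirmed fix: anchoring the blacklist to the path segment (and escaping the second dot in the whitelist while I'm at it), since both are evaluated as regexes against the full path:

[monitor:///home/foo/logs/*/app]
whitelist = \.gmt\.log$
blacklist = /monitor/
disabled = false

It's also possible that 'splunk list monitor' simply reports everything under the monitored directory regardless of the blacklist, in which case checking what actually gets indexed would be the better test.
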
We had a recent instance where logfiles that were starting to be loaded up with null characters and growing exponentially larger wound up requiring a Splunk restart in order to release them. The UF was obviously not forwarding them due to the binary nature of the file, but the logs show that the UF continued trying. An SA removed the file, but the UF would not let it go. Is this expected behavior? I've not encountered this in the past, where log rotation or removal resulted in the process keeping the file held.

splunkd 13880 splunkuser 38r REG 9,2 843716702208 51601434 /home/blah/logs/blah/sfdc/blah.blah-6-3-blah.app1.20150604.NNN.gmt.log (deleted)

The splunkd logs only say:

TailingProcessor - File will not be read, is too small to match seekptr checksum ...

So, if the file won't be read, any ideas as to why the UF wouldn't release it?

I think some of my forwarders may be experiencing cases where logs are being removed before all events have been forwarded. Is there a string to look for in splunkd.log or possibly recommendations for increased logging levels to detect when splunkd encounters a situation where a file it has been monitoring no longer exists?
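To get started, I've been poking at the forwarders' own splunkd.log via the _internal index -- a rough sketch, with the exact message text left open since it seems to vary by version, and "myforwarder" standing in for the real hostnames:

index=_internal sourcetype=splunkd host=myforwarder* (component=TailingProcessor OR component=WatchedFile) (log_level=WARN OR log_level=ERROR)
| stats count by host, component, log_level

The hope is that anything along the lines of a monitored file disappearing mid-read shows up at WARN or above under those components.
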
One of my users is having an issue with timechart ...

(host=aaa6* OR host=bbb24*) "[string to filter search]" (E=005 OR E=00D OR E=0Bb OR E=0Bz) | timechart span=12h count by E

The output of this search gives me a table with the following five field headers, AND the _time field is correctly broken down into 12-hour increments:

_time, 005, 00D, 0Bb, 0Bz

(host=aaa6* OR host=bbb24*) "[string to filter search]" (E=005 OR E=00D OR E=0Bb OR E=0Bz) | timechart count by E span=12h

This search gives me the following five field headers, AND the _time field is broken down into 30-minute increments:

_time, 0, 00D, 0Bb, 0Bz

The timechart documentation does not appear to suggest that the placement of 'span' is required directly after 'timechart', but it appears to be so. Is this worth filing a bug?

Paul

I have some sourcetypes that I'd like to go to two indexing destinations.

Universal forwarder (inputs.conf):

[monitor:///path/to/logs]
index = myindex
sourcetype = mysourcetype
_TCP_ROUTING = oregon,sanDiego

Intermediate forwarder (outputs.conf):

[tcpout]
defaultGroup = oregon

[tcpout:oregon]
autoLB = true
autoLBFrequency = 30
server = portland1:7777,portland2:7777,portland3:7777

[tcpout:sanDiego]
autoLB = true
autoLBFrequency = 30
server = sd1:7777,sd2:7777,sd3:7777

The logs are going only to the default group (oregon), so I'm wondering now if I need to add forwarding instances on my intermediate forwarders, so that one forwarding instance routes to oregon and the other to San Diego. That way I wouldn't have any TCP routing statements on the intermediate forwarder. I'm hoping that I'm making sense here ... i.e., on the UF:

(inputs.conf)

[monitor:///path/to/logs]
index = myindex
sourcetype = mysourcetype
_TCP_ROUTING = oregon_forwarders,sandiego_forwarders

then (also on the UF), outputs.conf:

[tcpout]
defaultGroup = oregon_forwarders

[tcpout:oregon_forwarders]
autoLB = true
autoLBFrequency = 30
server = forwarder1:7777,forwarder2:7777

[tcpout:sandiego_forwarders]
autoLB = true
autoLBFrequency = 30
server = forwarder1:7778,forwarder2:7778

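The simpler route I keep circling back to, sketched here on the assumption that every one of these sourcetypes should land in both sites: drop _TCP_ROUTING from the UF entirely and let the intermediate forwarder clone to both groups by listing them both as the default, since (as I understand it) data is duplicated to each group named in defaultGroup.

# outputs.conf on the intermediate forwarder -- clones every event to both groups
[tcpout]
defaultGroup = oregon, sanDiego

[tcpout:oregon]
server = portland1:7777,portland2:7777,portland3:7777
autoLB = true
autoLBFrequency = 30

[tcpout:sanDiego]
server = sd1:7777,sd2:7777,sd3:7777
autoLB = true
autoLBFrequency = 30

If only some sourcetypes should be cloned, my understanding is that the selection would have to happen on the intermediate tier via props/transforms with _TCP_ROUTING, since a routing choice made on the UF only selects among output groups defined on the UF itself.
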