Activity Feed
- Karma Re: REST API - How to? for ziegfried. 06-05-2020 12:46 AM
- Karma Re: REST API Logs for Drainy. 06-05-2020 12:46 AM
- Karma Re: Sideview Utils Multi Select Drop Down for hexx. 06-05-2020 12:46 AM
- Karma Re: Changing the search used, based upon the time range selected for dwaddle. 06-05-2020 12:46 AM
- Got Karma for Changing the search used, based upon the time range selected. 06-05-2020 12:46 AM
- Got Karma for Deployment Monitor Missing Forwarders. 06-05-2020 12:46 AM
- Got Karma for Deployment Monitor Missing Forwarders. 06-05-2020 12:46 AM
- Got Karma for Deployment Monitor Missing Forwarders. 06-05-2020 12:46 AM
- Got Karma for Re: EASY QUESTION: How to search for events that produce a field value of zero. 06-05-2020 12:46 AM
- Got Karma for Re: A 90day Accelerated Report only shows 10 days of data. 06-05-2020 12:46 AM
- Got Karma for Re: Index query question (latest event from each source type by host). 06-05-2020 12:46 AM
- Got Karma for Re: Index query question (latest event from each source type by host). 06-05-2020 12:46 AM
- Got Karma for Re: Index query question (latest event from each source type by host). 06-05-2020 12:46 AM
- Got Karma for Re: Index query question (latest event from each source type by host). 06-05-2020 12:46 AM
- Got Karma for Re: Changing the search used, based upon the time range selected. 06-05-2020 12:46 AM
- Got Karma for Re: Report acceleration using earliest and latest not possible?. 06-05-2020 12:46 AM
- Got Karma for Re: Graphs for field values. 06-05-2020 12:46 AM
- Got Karma for Re: Using avg func with stats to find the average of a result for a week. 06-05-2020 12:46 AM
- Got Karma for Re: Failure to return values in a database lookup using Splunk DB Connect. 06-05-2020 12:46 AM
- Got Karma for FS Change keeps adding and deleting files from monitoring. 06-05-2020 12:46 AM
05-29-2014
01:33 PM
I see what you mean. But as a basic rule, is it fair to say that any user who is "allowed" access to the PCI network should be listed as such in identities.csv? Any user found in the events who is NOT in the identities.csv file will then be treated accordingly, depending on the DSS requirement and the PCI report in which the unidentified user was found. Is that right?
05-28-2014
10:27 AM
In relation to Splunk for PCI Compliance, what happens when Splunk finds a user in the events who is not listed in identities.csv? Is this user auto-categorized as "unknown user" or something similar?
04-25-2014
02:39 PM
This app depends on several things being configured correctly, mainly the aws.conf file.
However, if everything looks right and you are still getting this error, perform the following:
1 - There may be a typo in the script - search the script for "spleep" and change it to read "sleep"
2 - Update the "boto" Python libraries used by the SplunkAppforAWS with the following commands (Linux):
sudo yum install python-pip
sudo pip install boto --upgrade
3 - Search the get_instances.py script for the line that reads " running_all = ec2_conn.get_all_instances() "
4 - Right above this line, enter the following: " running_all = [] "
Now your script should read as follows:
running_all = []
running_all = ec2_conn.get_all_instances()
After all of this, restart Splunk and you should be good.
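For clarity, here is a minimal sketch of the pattern this fix produces (ec2_conn is the EC2 connection object the script already creates; the point is simply that running_all always exists, even if the API call fails):
# running_all is pre-defined, so later references to it can never
# raise "running_all is not defined", even if the API call throws
running_all = []
try:
    running_all = ec2_conn.get_all_instances()
except Exception as e:
    # on failure the script simply carries on with an empty list
    print("get_all_instances failed: %s" % e)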
04-25-2014
02:27 PM
Using the Splunk App for AWS, we cannot get the get_instances.py script to run without it throwing an error.
We keep getting an error saying "running_all is not defined"... How do we get this script to run successfully?
02-25-2014
03:03 PM
Did anyone come up with a way to do this?
09-10-2013
05:44 PM
1 Karma
Have you seen any performance hits now that it has been running for a few months? How big were/are your TSIDX namespaces?
08-27-2013
01:02 PM
I have also seen this problem. Checked permissions galore and nothing seems to work.
Does anyone have any answers to this?
05-07-2013
01:30 PM
The whitelist that you have specified is not escaping the "." in the filename.
I think your whitelist should have a backslash before the ".log".
EG:
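whitelist = \.log$
(or something along those lines - the backslash makes the "." match a literal dot rather than any single character)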
04-09-2013
01:02 PM
1 Karma
Assuming that the xmlkv command is splitting the key-value pairs correctly in this case, you could do this:
sourcetype="ScreenSharingEvent" | xmlkv | search SCHEDULED=0 | chart count by ns2:accountId ns2:sessionType
04-09-2013
01:00 PM
1 Karma
I think the default retention period for the internal index is 28 days, so without changing that you will not be able to see 90 days of data. I am not sure why you are only seeing 10 days of data - was this setting lowered by any chance? Do you have access to the CLI? If so, you can run the following command and, from the output, check the "frozenTimePeriodInSecs" setting for the [_internal] stanza to see how long you are keeping internal data. (You can also check the indexes page in Manager to see the "earliest event" in that index, to confirm whether there is indeed any data from more than 10 days ago.)
(assuming Splunk is installed in /opt/splunk...)
Command: /opt/splunk/bin/splunk cmd btool indexes list --debug
Also, remember that report-accelerated data will not live longer than the original raw data, regardless of the report acceleration window setting.
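For example, to keep 90 days of internal data you would want something like this in indexes.conf (7776000 seconds = 90 days):
[_internal]
frozenTimePeriodInSecs = 7776000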
03-19-2013
10:48 AM
I am getting the same problem with my instance. DBConnect Version is 1.0.8 and Splunk version is 5.0.1.
03-19-2013
08:55 AM
1 Karma
I have the same problem here... there are no errors in that "Recent DB Connect Errors" search, and I still get an error code of 1 saying "results may be incomplete". I can successfully run a SQL query using dbquery against the table in question; however, when I try to run my lookup it does not work.
My database lookup seems to be set up correctly.
Is there another bug here?
03-19-2013
06:30 AM
1 Karma
I want to look up data from my database and bring it into Splunk to add more information to my log events. However, I do not want my searches querying the database every time we run a search, as that may put a large load on the database. Is there any way to build an internal lookup table in Splunk by querying the database on a periodic basis, and then use this lookup table in my searches?
This would eliminate the issue of querying the database for every search we run.
Thanks!
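Something along these lines is what I have in mind - a scheduled saved search that rebuilds a CSV lookup (the database, table, and field names here are made up):
| dbquery "my_database" "SELECT user_id, full_name FROM users" | outputlookup user_info.csv
Then our regular searches could use | lookup user_info.csv user_id OUTPUT full_name without ever touching the database.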
03-13-2013
07:29 AM
When you add a new "index" (and the repFactor attribute is set to "auto" in indexes.conf), all data that enters that index will be replicated. If you add a new "indexER" (note the difference between "index" and "indexer"), then all data arriving at that indexer will be replicated, provided you have indeed set it up as a peer in the cluster. Replication happens continuously, in 64KB chunks of data (as far as I know). Hope that helps.
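For example, defining a new replicated index on the cluster peers looks something like this in indexes.conf (index name and paths are illustrative):
[my_new_index]
homePath = $SPLUNK_DB/my_new_index/db
coldPath = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
repFactor = auto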
03-12-2013
08:34 AM
Also, remember that if you are setting up a second indexer to engage in replication with an existing indexer, then the existing data will NOT be replicated. You will only replicate data that was indexed AFTER you enabled replication.
03-12-2013
08:22 AM
3 Karma
Hi Olivier,
The problem here is that a search will only use a Report Acceleration (RA) summary if the hash of the accelerated part of the new search is exactly the same as that of the original acceleration search.
EG:
index=main sourcetype=my_sourcetype | timechart count by host
The entirety of this search above is accelerated and a summary is created from its results.
If we create another search as follows, it WILL use the RA summary generated from the above search because the hash of the RA part of the search is the same as the RA search above:
index=main sourcetype=my_sourcetype | timechart count by host | search host=my_host
Although this search is different from the one that was report accelerated, it will STILL use the summary from the first search, because the "RA part" of the search is EXACTLY the same and thus the hash is the same.
Now if we move back to your example... you have included "earliest" and "latest" in the middle of the "RA part" of the search. This changes the hash, and thus the search will not use the summary from the original RA search.
Therefore, the following search would NOT use the RA summary from my example, because we have changed the hash of the RA part of the search and it no longer matches the original RA search:
index=main earliest=-2d@d latest=now sourcetype=my_sourcetype | timechart count by host | search host=my_host
Does that make sense?
As a rule of thumb, if you want to use an RA summary from another search, you must ensure that the NEW search has the exact same hash as the original RA search; then you can pipe on more commands. Just make sure that the FIRST part of the new search matches the RA search.
Hope that helps.
John
02-20-2013
02:36 PM
Hi tiberious726,
So are you saying that the SEARCH is calculating the "accumulated total bytes", or that the raw TX value in the events is the "accumulated total bytes" (and that is why we are finding the difference between TXbytes and lastTX in this search)? The latter makes the most sense to me. What is strange is that, for some of my instances, I am seeing negative results for the difference between the current TXbytes value and the lastTX... which does not make any sense.
02-20-2013
02:01 PM
Hi Guys,
I have some confusion around the Interface Throughput calculations.
The following search seems to be finding the average of the DIFFERENCE between the last TX value and the current TX value. What do the TX values represent? The current upload bytes for that poll period, or the accumulated upload bytes for that interface?
index="os" sourcetype="interfaces" host=* | multikv fields name, inetAddr, RXbytes, TXbytes | streamstats current=f last(TXbytes) as lastTX, last(RXbytes) as lastRX by Name | eval time=_time | strcat Name "-" inetAddr "@" host Interface_Host | eval RX_Thruput_KB = (lastRX-RXbytes)/1024 | eval TX_Thruput_KB = (lastTX-TXbytes)/1024 | timechart eval(sum(TX_Thruput_KB)/dc(time)) by Interface_Host
What are we trying to calculate here? Also, is this an accurate representation of bandwidth usage for that interface on a system?
Cheers,
John
05-17-2012
09:32 AM
I also need to find this out.
04-05-2012
10:19 AM
I'm indexing a CSV file and I have SHOULD_LINEMERGE set to "false" so it will break after each new line.
However, per 24-hour period (and about 600,000 events), I get ~50 events which are not line-broken correctly and have half of the event as a new event - how is this even happening if I have SHOULD_LINEMERGE=false? Isn't the default to break at a new line?
The only thing I can think of is that a small subset of the events in the CSV are broken over two lines (if that's even possible). Or is there a limit to the number of characters that Splunk will check for a line break before it just breaks the event at the limit? That would mean we had a few very long entries in the CSV file which Splunk didn't check all the way to the end due to a limit of some sort.
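For reference, here is roughly what I have in props.conf (sourcetype name changed; TRUNCATE = 10000 is, I believe, the default character limit I am wondering about):
[my_csv_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000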
04-04-2012
03:37 PM
I want to know the following in relation to the REST API:
Can we hit endpoints on UFs and LWFs?
What is the REST endpoint to check if an instance is alive?
Can we read a Splunk log file from the file system itself using the REST API? EG: on a LWF, where we are not indexing any data but we are writing to Splunk log files - is there a way to view/query/tail the log files directly via the REST API?
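For example, is something like this expected to work against a UF or LWF (host and credentials are placeholders; I am guessing at /services/server/info as the liveness check):
curl -k -u admin:changeme https://my_forwarder:8089/services/server/info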
Thanks!
John
03-15-2012
05:56 PM
SylviaB - How are you trying to set this up? What sort of error message are you getting that says it isn't working? I have it working currently using the following syntax:
[splunktcp://9998]
persistentQueueSize=100GB
03-14-2012
04:37 PM
3 Karma
The Missing Forwarders dashboard is telling me that there are x number of missing forwarders which "have not connected in the past 24 hours". However, when I check the detailed results, the "last_connected" time for some of those forwarders is actually within 24 hours of the current time.
Can anyone help me out here as to why this is happening?
02-15-2012
01:31 PM
2 Karma
I am monitoring /etc/hosts.allow and /etc/hosts.deny for change, with a poll period of 300 seconds.
[fschange:/etc/hosts.allow]
index = fschange_main
pollPeriod = 300
[fschange:/etc/hosts.deny]
index = fschange_main
pollPeriod = 300
For some reason, every poll period (5 mins) I get 2 events for each file: one with "action=add" and another with "action=delete". As I said, this keeps happening once per poll period.
Can someone tell me what is wrong? I do not have duplicate fschange stanzas for those files.
Thanks!
John