Activity Feed
- Posted Re: Dashboard Studio Single Value with Trendlines on Splunk Enterprise. 01-28-2025 06:43 AM
- Posted Re: Dashboard Studio Single Value with Trendlines on Splunk Enterprise. 01-27-2025 11:14 AM
- Posted Re: Dashboard Studio Single Value with Trendlines on Splunk Enterprise. 01-27-2025 10:27 AM
- Posted Dashboard Studio Single Value with Trendlines on Splunk Enterprise. 01-27-2025 06:58 AM
- Posted Re: Lookup Table Modifying _time and timepicker ignoring on Splunk Search. 01-24-2025 07:32 AM
- Posted Re: Lookup Table Modifying _time and timepicker ignoring on Splunk Search. 01-24-2025 06:51 AM
- Posted Lookup Table Modifying _time and timepicker ignoring on Splunk Search. 01-24-2025 06:30 AM
- Posted Re: block any search for index=* with workload on Getting Data In. 12-03-2024 06:52 AM
- Posted Why are scheduled searches defaulting to other and causing wrong cron timezone? on Alerting. 04-11-2023 07:30 AM
- Posted Re: Splunk Add-on for AWS Issues with Kinesis Pull on All Apps and Add-ons. 07-21-2022 05:41 AM
- Posted How to Work Around Distinct 10K Limit on Splunk Search. 06-01-2022 07:19 AM
- Posted Re: Dynamically Subtract Two Last Column Values on Splunk Search. 05-02-2022 10:28 AM
- Posted How to dynamically subtract two last column values? on Splunk Search. 05-02-2022 08:33 AM
- Posted Re: How to get Stats values by Month as a Column? on Dashboards & Visualizations. 03-29-2022 07:34 AM
- Posted How to get Stats values by Month as a Column? on Dashboards & Visualizations. 03-25-2022 12:40 PM
- Tagged How to get Stats values by Month as a Column? on Dashboards & Visualizations. 03-25-2022 12:40 PM
- Posted Re: How to set an alert to fire based on lookup table value? on Splunk Search. 02-23-2022 01:38 PM
- Posted How to set an alert to fire based on lookup table value? on Splunk Search. 02-23-2022 06:17 AM
- Posted Re: Reading complexed nested Json on Splunk Search. 02-16-2022 08:36 AM
- Posted Re: Reading complexed nested Json on Splunk Search. 02-16-2022 07:54 AM
11-16-2015
12:11 PM
Thanks. 2.0.6
Chris
11-16-2015
07:42 AM
Yep, I removed the extra stuff and just have:
[*]
I keep getting:
11/16/2015 10:12:39 [WARNING] [health.py] The user [foobar] is not allowed to use health logger.
My healthlog.conf in the default directory looks like this:
[default]
## PY - any python function health logger
## DB - dbx2 python proxy health logger
## JP - java dbx2 proxy health logger
# do not leave spaces within items
loggers = PY,DB,JP
hiddens = SESSION_KEY,QUERY
[admin]
hiddens = SESSION_KEY
[nobody]
Any other ideas?
Tx
Chris
11-16-2015
06:11 AM
Hi, do you mind sharing your exact syntax?
I added:
[*]
hiddens=SESSION_KEY
to: Splunk\etc\apps\splunk_app_db_connect\local\healthlog.conf and I still have the same issue.
Thanks.
Chris
10-30-2015
07:35 AM
HI,
I have a few large directories that take a long time for Splunk to start indexing after a restart. Is there a way to prioritize which monitor stanzas Splunk starts indexing first? Some of my file monitoring stanzas are nice-to-have and others are critical; I would like the critical ones indexed first.
Thank you,
Chris
09-17-2015
09:08 AM
Hmm, that didn't work for me. In testing, I have the following field in epoch time, which I know corresponds to "2015-09-15T13:56:00":
CreatedDate:1442339760000
Doing the following search:
basesearch |eval mytime=strftime(CreatedDate,"%c")
"mytime" came back as "Fri Dec 31 23:59:59 9999"
Thanks
Chris
09-14-2015
06:25 AM
Thanks. I'm not sure this solves my issue. Anyway, I did find a workaround by using calculated fields that override the same field name that used to display epoch time. Below is an example for a MS SQL datetime field:
strftime(TransactionInterval/1000,"%Y-%m-%dT%H:%M:%S.%3N")
Unfortunately, I have to do this for every date or time field, and adjust the format (e.g., remove microseconds) if it's a different type of MS SQL date format. Seems like there should be a config that chooses between epoch and human-readable display.
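For reference, the calculated field lives in props.conf and looks roughly like this (the sourcetype name here is a placeholder for my actual DB input sourcetype):
[my_dbx_sourcetype]
EVAL-TransactionInterval = strftime(TransactionInterval/1000,"%Y-%m-%dT%H:%M:%S.%3N")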
Tx
Chris
09-03-2015
08:37 AM
2 Karma
Hi,
How can I get Splunk DB Connect 2 to display datetime, date, or time columns as human-readable (not epoch) for Microsoft SQL Server? I have used DBX via JDBC and the date/time columns all come back human-readable. I saw a few posts suggesting casting the date/time, but this seems clunky. Seems like there has got to be a config somewhere for this...
Thank you,
Chris
09-02-2015
09:36 AM
1 Karma
Hmm, I tried this and it's not working. Something wrong with my syntax? I did have to put a second "search" after the "[" for the subsearch to get past a Splunk error ("Unknown Search Command index").
index=myindex source=mysource Description=mydescription | eval eid=_cd | search [ search index=myindex source=mysource Description=mydescription | streamstats count by _raw | search count>1 | eval eid=_cd | fields eid]
Thank you!
Chris
08-25-2015
11:26 AM
I'm baffled. It seems like it could be related to any file directly on the indexer, so for now I'll just focus on splunkd.log.
Running the tail process I see:
file position 12955579
file size 12955579
parent $SPLUNK_HOME\var\log\splunk\rpc.log*
percent 100.00
type open file
Subsequent calls to the tail process several minutes later show the file always stays open and the position does not change.
When I search the indexer (index=_internal source=D:\srvapps\splunk\var\log\splunk\splunkd.log), the last event in the index was 40 minutes old, but when I look directly at the file on the server, I clearly see events as recent as a few minutes ago.
When I restart splunkd, everything works fine for an hour or so, then stops again. Events from the forwarder, DBX, or monitored UNC-path files are indexed in a timely manner.
Anywhere else I should look?
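For reference, the tail status above came from the inputstatus endpoint; the same data can also be pulled in a search (a sketch):
| rest /services/admin/inputstatus/TailingProcessor:FileStatus splunk_server=local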
Thanks
Chris
08-24-2015
07:29 AM
Hi,
I have been banging my head for a while. I have a couple of flat files that are a monitored input directly on the indexer. The events just stop getting to the indexer (I assume so, because they do not show up in a search), but I can clearly see new events arriving in the flat file:
[monitor://D:\SrvApps\Splunk\etc\apps\output\metrics.log]
disabled = false
crcSalt = <SOURCE>
index = myindex
sourcetype = metrics
alwaysOpenFile = 1
recursive = false
Simple inputs.conf above; I tried crcSalt and alwaysOpenFile. Now if I put this monitor on a forwarder, the events are quickly indexed. I can't see what is going on. I tried S.O.S. and didn't see anything standing out. Also tried a few tips from here: http://wiki.splunk.com/Community:Troubleshooting_Monitor_Inputs
Any suggestions on how I can narrow down what Splunk is doing? I'm on Windows, running the latest 6.x.
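For what it's worth, I can also confirm what splunkd thinks it is monitoring from the CLI (run from $SPLUNK_HOME\bin):
splunk list monitor
splunk btool inputs list --debug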
Thanks
Chris
08-21-2015
10:24 AM
Thank you for your reply. This made me dig more, and I noticed that some directories were being archived. Then with more reading I saw:
recurse = [true|false]
* If true, recurse directories within the directory specified in [fschange].
* Defaults to true.
Hence, changing recurse=false really reduced the I/O read time.
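For anyone else hitting this, the change was just (the directory path here is an example):
[fschange:D:\data\watched_dir]
recurse = false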
Thanks!
Chris
08-20-2015
11:41 AM
Hi,
I have some very large directories. Here is my inputs.conf:
[monitor://\\server\folder]
disabled = false
host = myhost
index = mylogs
sourcetype = mytasks
ignoreOlderThan = 2d
whitelist = (MYTasks\[EXPORT.*.log|MYTasks\[IMPORT.*.log)
When I check the number of files (Data inputs » Files & directories), I see 3840 files. Because of the whitelist and ignoreOlderThan, most are ignored. Question: is this common? It seems inefficient for Splunk to monitor "skipped" files, and I'm not sure if it has to re-read/touch these at every restart. I get a ton of files listed in services/admin/inputstatus/TailingProcessor:FileStatus. If this is normal I'll move on; I'm just trying to make sure my instance is performing optimally.
Thank you,
Chris
07-10-2015
07:06 AM
Thanks. db_connect_admin is part of the admin role, and the user is me, "admin". The admin role shows the following:
Imported capabilities
db_connect_create_connection
db_connect_create_dblookup
db_connect_create_identity
db_connect_create_resource_pool
db_connect_delete_connection
db_connect_delete_dblookup
db_connect_delete_identity
db_connect_delete_resource_pool
db_connect_execute_query
db_connect_read_app_conf
db_connect_read_connection
db_connect_read_dblookup
db_connect_read_identity
db_connect_read_resource_pool
db_connect_read_rpcserver
db_connect_request_metadata
db_connect_request_status
db_connect_update_connection
db_connect_update_dblookup
db_connect_update_identity
db_connect_update_resource_pool
db_connect_update_rpcserver
db_connect_use_custom_action
db_connect_write_app_conf
Does not make sense....
Chris
06-29-2015
09:59 AM
4 Karma
Hi,
Can't view health logger in Splunk DBX 2.x and I see the following error:
06/29/2015 11:48:21 [WARNING] [health.py] The user [foobar] is not allowed to use health logger.
Dug around and I can't see where to set the permissions for this script. Any ideas?
Thanks
Chris
06-10-2015
03:45 PM
Hi,
Running DBX 2.0. How do I reload data from a certain checkpoint value (i.e., reset a timestamp or rising column value)? I thought I could just modify the following in inputs.conf, but it keeps getting overwritten:
tail_rising_column_checkpoint_value = 1433976020000
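For context, the full input stanza I'm editing looks roughly like this (the stanza name is a placeholder for my actual DB input; the mi_input form is my understanding of how DBX 2.x defines inputs):
[mi_input://my_rising_column_input]
tail_rising_column_checkpoint_value = 1433976020000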
Is there another conf file I could use?
Thanks
Chris
06-04-2015
01:18 PM
This didn't work. It got close, but did not group by Place: for the same Place, it duplicated a row for each SubTotal.
This is what I'm looking for on one row:
Place, SubTotal1, SubTotal2, SubTotal3, Grand Total.
Thanks for your help.
Chris
06-04-2015
12:45 PM
Hmm, but I can't figure out how to do that in a search query.
For example, given that I have 3 sales types for each Place, named SubTotal1, SubTotal2 and SubTotal3, executing this query returns:
mysearch | lookup salesID_lookup SalesID as SalesID OUTPUT Place, SalesType | stats sum(SalesRevenue) as SalesTypeTotal by SalesType
SubTotal1 1000
SubTotal2 1200
SubTotal3 1100
What I want is:
Place, SubTotal1, SubTotal2, SubTotal3, Grand Total
where Grand Total is the total of all the SubTotals for each Place. Hope I'm explaining it correctly.
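From more reading, it seems chart plus addtotals might pivot this the way I want (a sketch using the same field names as above):
mysearch | lookup salesID_lookup SalesID as SalesID OUTPUT Place, SalesType | chart sum(SalesRevenue) over Place by SalesType | addtotals fieldname="Grand Total"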
Thank you
Chris
06-04-2015
12:03 PM
Hi,
I can't seem to get this working. This is what I want, so I can do a multi-series stacked bar chart.
Columns:
Place, SubTotal 1, SubTotal 2, SubTotal 3, Grand Total
My lookup table will have 3 rows for each place.
Place, SubPlace 1
Place, SubPlace 2
Place, SubPlace 3
I have a search where I find sales amount by each SubPlace:
mysearch| lookup sales_lookup SalesID as SalesID OUTPUT Place, SalesType | stats sum(SalesRevenue) as SalesTypeTotal by SalesType
I can't figure out how to get it all on one row so I have: Place, SubTotal 1, SubTotal 2, SubTotal 3, Grand Total.
Any ideas? This should be easy ...
Chris
06-01-2015
07:11 AM
Sorry guys, this is old. I don't have this issue anymore.
Thank you,
Chris
05-20-2015
09:39 AM
Driving me batty.
With a source name of:
\server001\folder$\MyLogService150515-03.log
I did:
[source::\\server001\folder$\MyLogService*.log]
Still no go. grrr.
Chris
05-20-2015
07:52 AM
Thanks, I tried that and it's still not working. That was a typo from me masking the real text. I validate my regex here to make sure my entire source is captured: https://regex101.com/#python
Baffled....
Chris
05-20-2015
06:47 AM
I also changed the source to a full regex and tested that the regex works correctly. It is still not applying the transforms. I can only get the transforms to work by using the sourcetype; baffled why source is not working.
In props.conf:
[source::.server\d+.folder\$.MyLogService\d+-\d+\.log]
TRANSFORMS-grtrash = setnull , setparsing, badError, badError2
The source:
\server001\folder$\MyLogService150515-03.log
Thanks
Chris
05-20-2015
03:32 AM
Hi,
I have multiple sources going to one sourcetype. I'm trying to drop events, and my props and transforms work fine when keyed by the sourcetype. However, I want to have different rules by source.
In props.conf:
[source::MyLogService*.log]
TRANSFORMS-grtrash2 = eliminate-debug
In transforms.conf:
[eliminate-debug]
REGEX = (?m)-\s*DEBUG\s*-
DEST_KEY = queue
FORMAT = nullQueue
I've tried different combinations of defining the "source" in props.conf and nothing is working. The real source looks like:
\server\logfolder\MyLogService150520-01.log
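One thing I plan to try next, since (if I read the docs right) a source:: stanza has to match the entire source path, is adding the ... wildcard prefix:
[source::...MyLogService*.log]
TRANSFORMS-grtrash2 = eliminate-debug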
Any ideas?
Thank you!
Chris
05-14-2015
04:27 AM
Thank you very much for the response. I'm going to dig into both suggestions!
Chris