Activity Feed
- Posted SQL Monitoring - Splunk SPL to alert on the top users running long-running SQL queries on my databases on All Apps and Add-ons. 07-09-2024 10:17 AM
- Posted Two different TIME_PREFIX, one includes json formatted events on Getting Data In. 08-21-2023 01:44 PM
- Posted Re: Want to filter dataset within a log index to a metrics index on Getting Data In. 01-13-2023 01:43 PM
- Posted How to filter dataset within a log index to a metrics index? on Getting Data In. 01-13-2023 11:52 AM
- Posted Help adding an interactive Notes dashboard section on Dashboards & Visualizations. 03-06-2021 01:11 PM
- Tagged Help adding an interactive Notes dashboard section on Dashboards & Visualizations. 03-06-2021 01:11 PM
- Posted Filtering mstats data using eventtypes and tags on Splunk Search. 12-03-2020 07:53 PM
- Posted Re: How to get duration between a start and stop event and trigger an alert if duration is greater than 1 week? on Splunk Search. 06-29-2020 07:25 PM
- Karma Re: Help showing the Uptime in days for a Universal Forwarder for niketn. 06-29-2020 11:44 AM
- Posted Re: Splunk query for UPtime and Downtime? on Splunk Search. 06-28-2020 08:37 PM
- Karma How to calculate uptime percentage based on my data? for rakes568. 06-28-2020 02:26 AM
- Posted Re: How to calculate uptime percentage based on my data? on Splunk Search. 06-28-2020 02:12 AM
- Posted Re: Help showing the Uptime in days for a Universal Forwarder on Dashboards & Visualizations. 06-26-2020 01:29 PM
- Posted Re: Help showing the Uptime in days for a Universal Forwarder on Dashboards & Visualizations. 06-25-2020 02:29 PM
- Posted Help showing the Uptime/downtime percentage for a Universal Forwarder on Dashboards & Visualizations. 06-23-2020 05:13 PM
- Tagged Help showing the Uptime/downtime percentage for a Universal Forwarder on Dashboards & Visualizations. 06-23-2020 05:13 PM
- Karma Re: Infosec: Help with drilldowns from app panel that uses a datamodel for igifrin_splunk. 06-05-2020 12:50 AM
- Karma Re: How to mask a field value from raw events that shows in multiple patterns for woodcock. 06-05-2020 12:50 AM
- Got Karma for Infosec: Help with drilldowns from app panel that uses a datamodel. 06-05-2020 12:50 AM
07-09-2024
10:17 AM
SQL Monitoring - I'd like to know how to write a Splunk SPL query that alerts on the top users running long-running SQL queries on my databases. I'm currently using the MS SQL add-on for Splunk, with the included Perfmon:sqlserver:* monitors and the "mssql:agentlog" and "mssql:errorlog" sourcetypes. Thank you in advance!
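A minimal sketch of the kind of alert search that could answer this, assuming the indexed events expose a user field and a per-query duration in seconds (the mssql index name and both field names are hypothetical; the stock mssql:errorlog and mssql:agentlog sourcetypes may not carry per-query durations, so a DB Connect input or Extended Events feed may be needed to supply them):
index=mssql sourcetype="mssql:errorlog" OR sourcetype="mssql:agentlog" user=* duration=*
| where duration > 300
| stats count as slow_queries avg(duration) as avg_seconds max(duration) as max_seconds by user
| sort - slow_queries
| head 10
Saved as an alert, this could trigger whenever slow_queries exceeds a chosen threshold for any user.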
08-21-2023
01:44 PM
Hello, I'm trying to create working props/transforms to separate standard events from JSON-formatted logs (by filtering/resetting the JSON logs to their own sourcetype). Here's what I've tried so far; I'm able to do most of what I want, with the exception of timestamp recognition for the JSON events. The configuration below trims my JSON event headers and filters/resets the JSON events to their own separate sourcetype. Since the header is trimmed, Splunk is doing a great job auto-extracting my JSON field-value pairs. I'm looking for help getting the timestamp or _time value to match my JSON field "log_time".
props.conf
[mainlog]
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_PREFIX = (?=[20])|log_time:
SEDCMD-remove-jsonheader = s/^[0-9T\:Z]*.*?\s*{/{/g
TRANSFORMS-set_sourcetype = example_json
[mainlog:json]
TIME_PREFIX = log_time:
#TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 3
INDEXED_EXTRACTIONS = json
transforms.conf
[example_json]
REGEX = \{\"json\"\:
FORMAT = sourcetype::mainlog:json
DEST_KEY = MetaData:Sourcetype
Sample log:
2023-08-21 11:59:10 TRACE [pool-12-thread-1] c.a.l.m.e.AbstractElasticSearchBatch$ElasticSearchBatch [Slf4jLogging.scala:13] Deadline time left is 302ms and record count is 72
2023-08-21 11:11:41 TRACE [pool-11-thread-1] c.a.l.m.e.AbstractElasticSearchBatch$ElasticSearchBatch [Slf4jLogging.scala:13] Indexing {"json":"s3://example/logs/2023/08/21/0111111a-2222-33ff-9e4e-c1a01dfdf448.gz","phase":"ingest","log_time":"2023-08-21T15:11:31.073Z","tick":"7777777777","id":"0111111a-2222-33ff-9e4e-c1a01dfdf448","source_time":"2023-08-21T11:11:25Z","status":"submitted","client":"555555","environment":"test","category":"changestream","account":"9","level":7}
- Labels: props.conf, transforms.conf
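One adjustment worth testing in the [mainlog:json] stanza: TIME_PREFIX is a regular expression, so it should account for the quotes around the JSON key, and MAX_TIMESTAMP_LOOKAHEAD (currently 3) needs to cover the full 24-character ISO 8601 timestamp. A hedged sketch:
[mainlog:json]
# Match up to the opening quote of the log_time value
TIME_PREFIX = \"log_time\"\s*:\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
# The timestamp 2023-08-21T15:11:31.073Z is 24 characters long
MAX_TIMESTAMP_LOOKAHEAD = 25
INDEXED_EXTRACTIONS = json
Note that INDEXED_EXTRACTIONS is applied at the input phase, before index-time sourcetype rewrites from transforms.conf take effect, so whether it applies to the rewritten mainlog:json sourcetype depends on where parsing happens and is worth verifying.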
01-13-2023
01:43 PM
Thanks @richgalloway I have the conversion part configured, but what I'm having trouble with is knowing which [sourcetype] stanza to put the props.conf and transforms.conf settings under, since I'm filtering from an existing index and base sourcetype. Most of the main index data isn't a candidate to convert from log events to metrics. Normally I would use props and transforms to filter via REGEX and rename the matching data to a new sourcetype. In this case I'm trying to filter my REGEX match for a specific type of dataset, rename the sourcetype if needed, convert the field values to metrics, and send them to the new metrics index.
01-13-2023
11:52 AM
Hello,
I have an existing high-volume index and have discovered a chunk of event logs within it that would be a great candidate to convert to metrics. Can you filter these types of events to a metrics index and then convert the events to metrics at index time, all using props/transforms?
I have this props.conf
[my_highvol_sourcetype]
TRANSFORMS-routetoIndex = route_to_metrics_index
transforms.conf
[route_to_metrics_index]
REGEX = cpuUtilization\=
DEST_KEY = _MetaData:Index
FORMAT = my_metrics_index
But now, what sourcetype do I use to apply the log-to-metrics conversion settings? Should I filter this dataset to a new sourcetype within my high-volume index, so that I can apply the log-to-metrics conversion to all events matching the new sourcetype and then route them to the metrics index?
Any thoughts on whether something like this is possible using props/transforms would be helpful.
- Labels: props.conf, transforms.conf
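A hedged sketch of one way to wire this together, using the log-to-metrics feature (METRIC-SCHEMA-TRANSFORMS, Splunk 8.0+): re-sourcetype the matching events, route them to the metrics index, and attach the metric schema to the new sourcetype. The my_metrics_sourcetype name is an assumption, the measures must exist as index-time fields, and whether a TRANSFORMS-based sourcetype rewrite happens early enough for the metric schema to see it is worth testing (CLONE_SOURCETYPE is a fallback if not):
props.conf
[my_highvol_sourcetype]
TRANSFORMS-routetoIndex = set_metrics_sourcetype, route_to_metrics_index
[my_metrics_sourcetype]
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_cpu_metrics
transforms.conf
[set_metrics_sourcetype]
REGEX = cpuUtilization\=
FORMAT = sourcetype::my_metrics_sourcetype
DEST_KEY = MetaData:Sourcetype
[metric-schema:extract_cpu_metrics]
# Fields listed here become measures; remaining index-time fields become dimensions
METRIC-SCHEMA-MEASURES = cpuUtilization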
03-06-2021
01:11 PM
I'm looking for help on how to output the contents of my dashboard textbox to a KV store lookup. I'm hoping to display and collect the timestamp, the user who created the note entry, and the note text in the panel below. As a bonus, I would also like to make the existing note entries editable in my dashboard panel and capture the editing user and timestamp in another column.
- Tags: dashboard
- Labels: form, javascript, panel
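A minimal sketch of the collection step, assuming a KV-store-backed lookup definition named notes_kv_lookup and a dashboard text input bound to a token $notes_tok$ (both names hypothetical). A search fired when the token is submitted can append a row carrying the timestamp and the current user:
| makeresults
| eval user="$env:user$", note="$notes_tok$"
| table _time user note
| outputlookup append=true notes_kv_lookup
The display panel can then run | inputlookup notes_kv_lookup | sort - _time. Editing existing rows in place generally requires dashboard JavaScript writing back through the KV store REST endpoint, which lines up with the javascript label on this post.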
12-03-2020
07:53 PM
I'm looking for help filtering my mstats data using an eventtype OR tag I've created for groups of hosts. Here's an example of my CPU metrics dashboard panel:
| mstats avg(_value) as value where `nmon_metrics_index` metric_name=os.unix.nmon.cpu.cpu_all.Sys_PCT OR metric_name=os.unix.nmon.cpu.cpu_all.User_PCT OR metric_name=os.unix.nmon.cpu.cpu_all.Wait_PCT host=$host$ groupby metric_name, host span=1m
| `def_cpu_load_percent`
| timechart `nmon_span` avg(cpu_load_percent) AS cpu_load_percent by host useother=false
I've tried appending a non-metrics subsearch to search against the metric data using my tag AND host, so that only the selected hosts return in my panel:
index=example_index (eventtype=test1 OR eventtype=test2 OR eventtype=test3)
| search (host=* AND tag = test2)
| append
[ | mstats avg(_value) as value where `nmon_metrics_index` metric_name=os.unix.nmon.cpu.cpu_all.Sys_PCT OR metric_name=os.unix.nmon.cpu.cpu_all.User_PCT OR metric_name=os.unix.nmon.cpu.cpu_all.Wait_PCT host=dac51elo.pjm.com groupby metric_name, host span=1m
| `def_cpu_load_percent` ]
| timechart `nmon_span` avg(cpu_load_percent) AS cpu_load_percent by host useother=false
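One hedged alternative: filter after the mstats call with a subsearch that resolves the eventtype to a host list, since a subsearch ending in | fields host expands into a host=... OR host=... expression. This pulls all hosts back from mstats first, which is less efficient, but it avoids mixing event and metric results with append:
| mstats avg(_value) as value where `nmon_metrics_index` metric_name=os.unix.nmon.cpu.cpu_all.Sys_PCT OR metric_name=os.unix.nmon.cpu.cpu_all.User_PCT OR metric_name=os.unix.nmon.cpu.cpu_all.Wait_PCT groupby metric_name, host span=1m
| search [ search index=example_index eventtype=test2 | dedup host | fields host ]
| `def_cpu_load_percent`
| timechart `nmon_span` avg(cpu_load_percent) AS cpu_load_percent by host useother=false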
06-29-2020
07:25 PM
@niketn this seems very similar to how I'm trying to calculate uptime/downtime percentage by host for the last 7 days and last 30 days in my question here: https://community.splunk.com/t5/Dashboards-Visualizations/Help-showing-the-Uptime-downtime-percentage-for-a-Universal/td-p/505849 You seem to have a lot of experience on this topic; I appreciate your help in advance!
06-28-2020
08:37 PM
I've been trying to work with this same query to calculate the difference (_time of Action = "Splunkd Starting" minus _time of Action = "Splunkd Shutdown") to show downtime by host, then sum the total downtime by host for the past 7 days. The end result I'm hoping for is to show the percentage of uptime by host for the past 7 days, and also to chart the total percentage of uptime for all hosts over the same window.
index=_internal source="*SplunkUniversalForwarder*\\splunkd.log" (event_message="*Splunkd starting*" OR event_message="*Shutting down splunkd*")
| eval Action = case(like(event_message, "%Splunkd starting%"), "Splunkd Starting", like(event_message, "%Shutting down splunkd%"), "Splunkd Shutdown")
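A hedged sketch of the pairing step: sort ascending, carry each host's previous event forward with streamstats, and treat the gap between a shutdown and the following start as downtime (a host that shut down before the search window begins is still missed, a caveat the original question below also raises):
index=_internal source="*SplunkUniversalForwarder*\\splunkd.log" (event_message="*Splunkd starting*" OR event_message="*Shutting down splunkd*")
| eval Action = case(like(event_message, "%Splunkd starting%"), "Splunkd Starting", like(event_message, "%Shutting down splunkd%"), "Splunkd Shutdown")
| sort 0 _time
| streamstats current=f last(_time) as prev_time last(Action) as prev_action by host
| eval downtime=if(Action="Splunkd Starting" AND prev_action="Splunkd Shutdown", _time - prev_time, 0)
| stats sum(downtime) as total_down_secs by host
| eval uptime_pct=round(100 * (1 - total_down_secs / (7 * 86400)), 2)
| table host total_down_secs uptime_pct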
06-28-2020
02:12 AM
What did you end up doing for this? I'm trying to do the same calculation, using:
index=_internal source=*splunkd.log (event_message="*Splunkd starting*" OR event_message="*Shutting down splunkd*")
06-26-2020
01:29 PM
@niketn The rest command you recommended looks like it's meant for the deployment server. I'm using Splunk Cloud and don't have an on-prem deployment server, so I've tried using index=_internal source=*splunkd.log to monitor whether my UFs are online. I'm looking to show a percentage of uptime for the past 7 days; I need help subtracting the timestamps of the two different values to show how long a host was down, then summing that total downtime and dividing by 7 days. I'm also open to suggestions for a better way to calculate this. This query returns the host and timestamp for when splunkd shut down, and another event with the timestamp for when splunkd started:
index=_internal source="*SplunkUniversalForwarder*\\splunkd.log" (event_message="*Splunkd starting*" OR event_message="*Shutting down splunkd*")
| eval Action = case(like(event_message, "%Splunkd starting%"), "Splunkd Starting", like(event_message, "%Shutting down splunkd%"), "Splunkd Shutdown")
| stats count by host, _time, Action
I've also tried ending with:
| stats values(Action) as Action by host, _time
06-25-2020
02:29 PM
Since my hosts are Windows-based, I found this query helpful for showing uptime:
index=wineventlog host=* source="WinEventLog:System" EventCode=6013
| rex field=Message "The system uptime is (?<SystemUpTime>\d+) seconds."
| dedup host
| eval DaysUp=round(SystemUpTime/86400,2)
| eval Years=round(DaysUp/365,2)
| eval Months=round(DaysUp/30,2)
| table host DaysUp Years Months SystemUpTime
| sort host
| search DaysUp > 0
| strcat DaysUp " Days" UpTime
| sort - DaysUp
| table host UpTime
| fields - Years, Months, SystemUpTime
06-23-2020
05:13 PM
Hello, I'm looking for help showing the uptime/downtime percentage for my Universal Forwarders over the past 7 days. I've seen many people trying to solve a similar use case on Answers, but I haven't quite seen what I'm looking for yet. I've been testing the query below; my thinking was to calculate the difference in minutes between a host's timestamp for Action = "Splunkd Shutdown" and its timestamp for Action = "Splunkd Starting", then sum the total in minutes and divide by the total minutes in 1 week (10080) to get the uptime. There are problems with this logic, though: if the last time a host shut down is not within your search window, you won't get an accurate metric. I'm open to a discussion on how this can be monitored most accurately. This query returns the host and timestamp for when splunkd shut down, and another event with the timestamp for when splunkd started:
index=_internal source="*SplunkUniversalForwarder*\\splunkd.log" (event_message="*Splunkd starting*" OR event_message="*Shutting down splunkd*")
| eval Action = case(like(event_message, "%Splunkd starting%"), "Splunkd Starting", like(event_message, "%Shutting down splunkd%"), "Splunkd Shutdown")
| stats count by host, _time, Action
03-26-2020
11:04 AM
Here's the error I'm getting when trying to save the data input configuration, even though I can return events when executing the query:
03-26-2020
11:03 AM
I'm trying to work with a data input using DB Connect version 3.0, and I cannot get the input below to save using the field alias 'time', which uses this format:
2020-03-21 00:11:12.387
Based on this article, I added these configurations to my stanza to help DB Connect identify the correct timestamp format:
input_timestamp_format = yyyy-MM-dd HH:mm:ss.SSS
output_timestamp_format = yyyy-MM-dd HH:mm:ss.SSS
*The LogEntryId is my rising column and returns as column #1
*The time column/Timestamp returns as column #2
I've also used the Answers suggestion below to try to resolve a possible issue with NULL values:
https://answers.splunk.com/answers/616150/how-to-force-dbconnect-to-send-fields-with-null-va.html
[TestDB_2]
connection = TestDB
description = Test Query
disabled = 0
index = main
interval = */5 * * * *
max_rows = 1000
mode = advanced
output_timestamp_format = yyyy-MM-dd HH:mm:ss.SSS
query = SELECT le.LogEntryId AS [LogEntryId]
, [Date] AS [time]
, l.[Name] AS [Level]
, at.Name AS [Application Source]
, le.Logger AS [Logger]
, le.[Message] AS [Message]
, COALESCE(le.FullMessage, 'NA') AS [FullMessage]
, COALESCE(le.Exception, 'NA') AS [Exception]
, COALESCE(le.FullException, 'NA') AS [Full Exception]
FROM "Logging"."dbo"."LogEntry" le
JOIN "Logging"."dbo"."LevelType" l
ON l.LevelTypeId = le.LevelTypeId
JOIN "Logging"."dbo"."ApplicationSourceType" at
ON at.ApplicationSourceTypeId = le.ApplicationSourceTypeId
WHERE le.LogEntryId > '?'
AND le.LevelTypeId IN (3,4,5) -- WARN, ERROR, FATAL
AND at.[Name] != 'developer.example.com'
ORDER BY le.LogEntryId DESC;
sourcetype = Test
tail_rising_column_number = 1
input_timestamp_column_number = 2
input_timestamp_format = yyyy-MM-dd HH:mm:ss.SSS
index_time_mode = dbColumn
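One hedged note on the stanza above: DB Connect's rising-column checkpoint is normally paired with ascending order, so that the checkpoint advances past every fetched row; ORDER BY le.LogEntryId DESC combined with max_rows can cause rows to be skipped between runs. The usual pattern would be:
ORDER BY le.LogEntryId ASC;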
12-08-2019
03:55 PM
Thank you, that example didn't seem to work for me, but this did:
SEDCMD-anon = s/(bankRoutingNumber|bankAccountNumber)\"(:)+\"(\w+)/\1":"XXXXXXXXXXX/g
12-08-2019
02:42 PM
So there are several different formats in which the policy number shows up throughout this log. I have a transforms.conf stanza to filter for this particular format and send it to a different sourcetype:
transforms.conf
[test_sourcetype_CSA]
REGEX = \sservice\.CSAServiceImpl\s\(CSAServiceImpl\.
FORMAT = sourcetype::test:CSA
DEST_KEY = MetaData:Sourcetype
Sample log :
2019-12-03 15:17:32,57 DEBUG [ajp-/0.0.0.0:8209-16] service.CSAServiceImpl (CSAServiceImpl.java:89) []
- CSA Request object in debug: { "policyNumber":"2L77755540","bankAcctType":"Checking","bankRoutingNumber":"222111333","bankAccountNumber":"22222444888"}}
Then I was trying to apply this under the following props.conf stanza:
[test:CSA]
SEDCMD-anon = s/\"policyNumber\":\"(\w+)/policyNumber"XXXXXXXXXXXXXXX/g s/bankRoutingNumber\":\"(\d+)/bankRoutingNumber\":\"XXXXXXXXX/g s/bankAccountNumber\":\"(\d+)/bankAccountNumber\":\"XXXXXXXXXXX/g
12-08-2019
01:37 PM
@woodcock
I have another format in my log that I'm trying to mask but I've tried several combinations and not having luck with this format.
"bankAcctType":"Saving","bankRoutingNumber":"55522244","bankAccountNumber":"11133344444","accountHolderName":"John","AccountLastName"Doe"","signature":null,"additionalComments":""}}
I've tried applying these props.conf settings to the data to mask it; help appreciated:
[test:log]
CHARSET = UTF-8
LINE_BREAKER = ([\r\n]+)\d{2,4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\,\d{1,3}\s\w+\s+[
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TIME_PREFIX = ^
category = Splunk App Add-on Builder
pulldown_type = 1
KV_MODE = none
NO_BINARY_CHECK = true
disabled = false
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
SHOULD_LINEMERGE = false
SEDCMD-anon = s/(bankAcctType\":\")(\w+/)/XXXXXXXXX\2/g s/(bankRoutingNumber\":\")(\d+)/XXXXXXXXX\2/g s/(bankAccountNumber\":\")(\d+)/XXXXXXXXXXX\2/g
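For reference, a hedged correction, assuming the intent is to keep each JSON key and mask its value: in the stanza above, the replacement XXXXXXXXX\2 puts the mask where the key prefix was and keeps the value visible, and (\w+/) contains a stray / that closes the sed expression early. Keeping group 1 (the key) and masking the value would look like:
SEDCMD-anon = s/(\"bankAcctType\":\")\w+/\1XXXXXXXX/g s/(\"bankRoutingNumber\":\")\d+/\1XXXXXXXXX/g s/(\"bankAccountNumber\":\")\d+/\1XXXXXXXXXXX/g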
12-08-2019
01:29 PM
1 Karma
There was an issue with my REGEX. This did the trick:
REGEX = (SELECT|Select|select)\s+
DEST_KEY = queue
FORMAT = nullQueue
12-06-2019
07:58 AM
Right now, I'm building the add-on in my single instance test environment.
"applicationone:log" is the name I picked for the data sourcetype.
12-05-2019
02:34 PM
I'm trying this on a single test instance. After I make a change to my configs, I delete the data from the index and restart the instance. I then upload the data again to apply my updated configs against it.
12-05-2019
01:46 PM
@harsmarvania57 I tried that and it still isn't working. Could it be a problem with the sourcetype I'm using? Does it need to be applied to the _raw log data?
12-05-2019
01:15 PM
I'm trying to filter out unwanted data, but it's not working with my current stanzas in props & transforms. However, I was able to filter using the regex and reset the sourcetype, so that should rule out an issue with the regex I'm attempting to use.
Sample log for applicationone:
2019-12-03 00:59:57,812 stdout INFO [ajp-/0.0.0.0:8009-16]: Hibernate: select sample.SAMPLE_ID as SAMPLE_ID1_5_, SAMPLE0_.sample_DESCRIPTION as sample_DESCRIPTI2_5_ from sample_SAMPLE functional0_
props.conf
[applicationone:log]
TRANSFORMS-sendtonull = removeDBqueries
transforms.conf
[removeDBqueries]
REGEX = select\s+.*)
DEST_KEY = queue
FORMAT = nullQueue
12-01-2019
09:44 AM
1 Karma
@woodcock I added a 2nd capture group and now it's masking all my policyNumbers. Thank you!
Here's what I used in case this helps someone else :
SEDCMD-policyNumber_mask = s/(policy[^-=]+[-=]\s+)\w+/\1XXXXXXXXXXXXXXX/g s/(policyNumbers[^-]+\s+)\w+/\1XXXXXXXXXXXXXXX/g
11-30-2019
01:01 PM
It is for most events, except the log from my last update. Any thoughts on why it didn't apply to this one?
11-30-2019
09:26 AM
Thanks @woodcock! It's masking every policyNumber except logs with this format:
2019-11-25 07:51:39,659 INFO [ajp-/0.0.0.0:8209-17] security.SAMLAuthSuccessHandler (SAMLAuthSuccessHandler.java:104) []
- policyNumbers (filtered) for the user are ----------------------- 3T00005555