Good afternoon. I would like to know which index has had the least access at the data-query level. Regards.
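A sketch of one way to approximate this from the audit index. This assumes searches are audited to _audit and that index names appear literally in the search string — both assumptions, and indexes that are never queried at all will not appear in the results:

```spl
index=_audit action=search info=granted search=*
| rex field=search "index\s*=\s*\"?(?<queried_index>[\w-]+)"
| stats count as query_count by queried_index
| sort + query_count
```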
I am using Splunk Enterprise and wish to automatically forward events to Phantom. I am able to send events to Phantom with a saved search using the Phantom add-on. However, to send events to Phantom, I have to manually press the "Send to Phantom" button. Is there a good method to automate this? The Phantom add-on has an alert action to create an event in Phantom, but the add-on's README says this functionality is only enabled for Splunk Enterprise Security.
I need to know whether security vulnerabilities are patched in add-ons as well.
It's inconvenient that a Splunk deployment server cannot reload an 'app'; it only reloads a 'serverclass'. Here's a shortcut. The following bash script takes the name of a deployment app as a parameter and reloads the serverclasses that use that app. Copy the script to your deployment server to make it easier to push app changes to clients from the CLI.

#!/bin/bash
# Reload all serverclasses that reference the given deployment app.
if [ $# -lt 1 ]; then
  echo 1>&2 "$0: not enough arguments"
  exit 2
elif [ $# -gt 1 ]; then
  echo 1>&2 "$0: too many arguments"
  exit 2
elif ! grep -E -q "^\[serverClass:[^:]*:app:$1" /opt/splunk/etc/system/local/serverclass.conf; then
  echo 1>&2 "$0: No serverclasses found for $1"
  exit 0
fi

# Preview the reload command that will be run.
grep -E "^\[serverClass:[^:]*:app:$1" /opt/splunk/etc/system/local/serverclass.conf \
  | cut -d':' -f2 \
  | sed -E 's/^(.*)$/ -class \1/g' \
  | tr -d '\n' \
  | xargs echo sudo -H -u splunkserviceaccount /opt/splunk/bin/splunk reload deploy-server -auth admin:changeme

read -p "Press [Enter] to execute above command..."

# Run it for real.
grep -E "^\[serverClass:[^:]*:app:$1" /opt/splunk/etc/system/local/serverclass.conf \
  | cut -d':' -f2 \
  | sed -E 's/^(.*)$/ -class \1/g' \
  | tr -d '\n' \
  | xargs sudo -H -u splunkserviceaccount /opt/splunk/bin/splunk reload deploy-server -auth admin:changeme
Hello. At the moment, I don't have access to the Citrix logs; only the Windows logs (Security/Application/System). Does anyone know how I can use these events to figure out how many users log in to the Citrix environment? The attached screenshot shows the first 20 or so events that occur during a connection to the Citrix server. I don't understand why there are 6 sets of logons/logoffs (Logon_Type=8) before the user is able to select an app/RDP session from their landing page (Logon_Type=10). Is this normal behavior? Thanks and God bless, Genesius
What my search is trying to do: whenever the search matches an item in the lookup list, it should display the results, which I can turn into an alert. However, it is not working or displaying results, and I can't figure out why:

(index=cisco* OR index=proxy) dest_ip="" OR domain=""
| rename dest_ip as emotet_ip
| rename domain as emotet_domain
| where [| inputlookup emotet-lookup | fields emotet_ip, emotet_domain]
| stats values(emotet_ip) as emotetIP, values(emotet_domain) as emotetDomain
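For comparison, a common pattern for this kind of lookup-driven filter is to rename the lookup columns to the event field names inside a subsearch, so it expands into field=value terms in the base search. A sketch, assuming the lookup columns are emotet_ip and emotet_domain; the format arguments switch the within-row operator to OR so a row matches on either its IP or its domain:

```spl
(index=cisco* OR index=proxy)
    [| inputlookup emotet-lookup
     | fields emotet_ip emotet_domain
     | rename emotet_ip as dest_ip, emotet_domain as domain
     | format "(" "(" "OR" ")" "OR" ")" ]
| stats values(dest_ip) as emotetIP values(domain) as emotetDomain
```

If some lookup rows have an empty IP or domain column, those empties would need to be filtered out in the subsearch first.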
I have a rex statement that parses multiple events and extracts the servers and their state, something like below:

index="index-name" "keyword" instance="https://jenkins-*com"
| rex field=_raw "}\s(?<state>\d[-+]?[0-9]*\.?[0-9]+)"
| dedup 1 instance

The above query returns:

Name        state
instance1   1.00
instance2   0.00
instance3   1.00
...and so on.

I add eval statements after this query to check whether a specific instance and state are matched. This works, but the eval command gets repeated for all occurrences of "instances", like the following:

Name        state   eval_output
instance1   1.00    yes
instance2   0.00    no
instance3   1.00    yes

But what I would like to achieve is to break the looping: after the eval command has executed for all instances, I want to add another eval statement that just uses the output, without adding it to every instance. How can I achieve this? I have this problem while using the SVG app.
Either my instance is misconfigured, or the documentation has an error... or both?

Per "Access the master dashboard", the master dashboard contains these sections:
- Cluster overview
- Peers tab
- Indexes tab
- Search Heads tab

Also, from the same source, under "Use the monitoring console to view status": "You can use the monitoring console to monitor most aspects of your deployment, including the status of your indexer cluster. The information available through the console duplicates much of the information available on the master dashboard."

Note how it says "you can use..." (not "you must"), and that the console's info is supposed to duplicate much of what's on the master dashboard. That's not the case in my instance: the "Master Dashboard" (Settings - Distributed Environment - Indexer Clustering) is only showing "Clustering: Search Head". What am I doing wrong?

P.S. Our topology is two sites, three clustered indexers per site, one search head and master node per site.

P.P.S. Navigating to Indexing - Indexer Clustering in the Splunk Monitoring Console does show what "Access the master dashboard" says it's supposed to show. If that's intended, the questions are:
- Should the "Access the master dashboard" article say that its instructions don't apply when there is a Monitoring Console? (It does say I could use the Monitoring Console, but not that I must...)
- Shouldn't the "Master Dashboard" in the main instance (not the Monitoring Console) refer me to the Monitoring Console for "indexer clustering"?

Thanks!
Regarding the transaction command, what are orphaned events and evicted events? Is there a way to filter out logs that were not combined with other logs after using the transaction command?
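For reference, transaction marks each grouping with a closed_txn field (1 for complete, 0 for incomplete). A sketch of filtering to only complete transactions, assuming a grouping field UUID and placeholder start/end markers:

```spl
... | transaction UUID startswith="start" endswith="end" keepevicted=true
| search closed_txn=1
```

Evicted transactions (those that hit memory or time limits, or never saw their end condition) carry closed_txn=0; with the default keepevicted=false they are silently dropped from the output.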
I have this scenario:

log 1 contains - message: "app started", _time: 1234
log 2 contains - message: "ended", _time: 1235

rex to extract app from log1 and name it app
| eval start_time=strftime(_time, "%d-%m-%Y %H:%M:%S")
| rex to extract ended from log2 and name it app1
| eval end_time=strftime(_time, "%d-%m-%Y %H:%M:%S")
| stats values(app) AS app values(app1) as app1 values(start_time) values(end_time) by _time

So when I extract the value of message and time in both logs, I end up with something like:

app   app1   start_time   end_time
A             1234         1234
      A       1235         1235

What I am looking for is this:

app   app1   start_time   end_time
A     A      1234         1235

The first occurrence of A in the app field has the start details and the first occurrence of A in the app1 field has the end_time, and both should be on the same row. After that, go to the next row and repeat for the next occurrence of A, or whatever is in the app and app1 fields, in the same way. I would like your help on this. Thanks
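One way to collapse each start/end pair onto one row — a sketch, assuming each pair can be keyed by the app name as in the example above — is to unify the two fields and aggregate:

```spl
...
| eval name=coalesce(app, app1)
| stats min(_time) as start_epoch max(_time) as end_epoch by name
| eval start_time=strftime(start_epoch, "%d-%m-%Y %H:%M:%S"),
       end_time=strftime(end_epoch, "%d-%m-%Y %H:%M:%S")
```

Note this collapses all events for the same name into one row; if the same app starts and ends repeatedly, something stateful like streamstats or transaction would be needed instead.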
I have some data that is being forwarded to another entity via our heavy forwarders, and I am trying to monitor that stream to ensure it doesn't fail or go too high or low. The query below is a stepping stone toward some other graphing that I want to do, but I need to solve the issue where my charted data stops when the feed goes to zero (aka dies). To be clear, it is the source feed going to my HF that has died, not the HF itself. I know this because there are multiple feeds and only one is down; the others are fine.

index=myindex sourcetype=mysourcetype group=per_sourcetype_thruput series=myfeed
| bin _time span=1d
| stats sum(ev) as dailyEv by _time sourcetype
| streamstats time_window=30d avg(dailyEv) as avgev stdev(dailyEv) as standardDev by sourcetype
| eval lowerBound=(avgev-(standardDev*2))
| eval upperBound=(avgev+(standardDev*2))
| eval isOutlier=if(dailyEv < lowerBound OR dailyEv > upperBound, 1, 0)
| table _time, dailyEv, lowerBound, upperBound, isOutlier

I am watching a rolling 30 days' worth of data, but when the event count [sum(ev)] goes to zero on calendar day 22, the graph stops at calendar day 21, even though today is calendar day 26. I have tried a couple of iterations of fillnull statements against the ev and dailyEv variables without success. I believe the issue may be related to streamstats and the fact that the _time field may be missing and required when the events are no longer seen in myfeed. Any thoughts on how to get the table to show zero values when myfeed dies so that I can potentially alert on isOutlier?
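A sketch of one workaround, assuming a single sourcetype per feed: bin + stats only emits rows for days that actually have events, whereas timechart emits a row for every span in the search time range, so empty days can be zero-filled before streamstats runs:

```spl
index=myindex sourcetype=mysourcetype group=per_sourcetype_thruput series=myfeed
| timechart span=1d sum(ev) as dailyEv
| fillnull value=0 dailyEv
| streamstats time_window=30d avg(dailyEv) as avgev stdev(dailyEv) as standardDev
| eval lowerBound=avgev-(standardDev*2), upperBound=avgev+(standardDev*2)
| eval isOutlier=if(dailyEv < lowerBound OR dailyEv > upperBound, 1, 0)
```

The by-sourcetype split is dropped here for simplicity; with multiple feeds in one search you would need a timechart by sourcetype and an untable, or one search per feed.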
I have a custom app built for a team, and I want them to be able to create dashboards that are shared by default with the app instead of private. Here are the role capabilities for the users; they were created to be very basic, as the majority of the users barely understand Splunk.

User capabilities:
export_results_is_visible
get_metadata
get_typeahead
pattern_detect
rest_apps_view
rest_properties_get
rest_properties_set
search

Any suggestions or pointers would be greatly appreciated. Thanks, everyone.
Logs from an email server throw multiple events (each a different detail) for one email, and each event has a numerical value field (MID). Each email has a unique MID. I need to extract the recipient from the events, plus the subject containing "Password Expiring Soon", where the MID matches across the events. I'm trying this to get the data, but it's terribly slow, misses events, and doesn't table correctly:

index=email MID=*
| join type=left MID [search index=email subject="Password Expiring Soon"]
| join type=left MID [search index=email recipient=*]

Please assist.
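Self-joins on the same index can usually be replaced with a single pass plus stats, which avoids join's subsearch result limits; a sketch, assuming the recipient and subject fields are already extracted:

```spl
index=email (subject="Password Expiring Soon" OR recipient=*)
| stats values(recipient) as recipient values(subject) as subject by MID
| search subject="Password Expiring Soon"
```

The final search keeps only MIDs whose event set included the target subject, while the stats row carries the recipient pulled from the sibling events.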
I'm trying to work with a data input using DB Connect version 3.0, and I cannot get the input below to save using the field alias 'time', which uses this format: 2020-03-21 00:11:12.387

Based on this article, I added these configurations to my stanza to help DB Connect identify the correct timestamp format:

input_timestamp_format = yyyy-MM-dd HH:mm:ss.SSS
output_timestamp_format = yyyy-MM-dd HH:mm:ss.SSS

* The LogEntryId is my rising column and returns as column #1
* The time column/timestamp returns as column #2

I've also used the Answers suggestion below to try to resolve a possible NULL-values issue: https://answers.splunk.com/answers/616150/how-to-force-dbconnect-to-send-fields-with-null-va.html

[TestDB_2]
connection = TestDB
description = Test Query
disabled = 0
index = main
interval = */5 * * * *
max_rows = 1000
mode = advanced
output_timestamp_format = yyyy-MM-dd HH:mm:ss.SSS
sourcetype = Test
tail_rising_column_number = 1
input_timestamp_column_number = 2
input_timestamp_format = yyyy-MM-dd HH:mm:ss.SSS
index_time_mode = dbColumn
query = (as entered in the DB Connect UI:)
SELECT le.LogEntryId AS [LogEntryId]
     , [Date] AS [time]
     , l.[Name] AS [Level]
     , at.Name AS [Application Source]
     , le.Logger AS [Logger]
     , le.[Message] AS [Message]
     , COALESCE(le.FullMessage, 'NA') AS [FullMessage]
     , COALESCE(le.Exception, 'NA') AS [Exception]
     , COALESCE(le.FullException, 'NA') AS [Full Exception]
FROM "Logging"."dbo"."LogEntry" le
JOIN "Logging"."dbo"."LevelType" l ON l.LevelTypeId = le.LevelTypeId
JOIN "Logging"."dbo"."ApplicationSourceType" at ON at.ApplicationSourceTypeId = le.ApplicationSourceTypeId
WHERE le.LogEntryId > '?'
  AND le.LevelTypeId IN (3,4,5) -- WARN, ERROR, FATAL
  AND at.[Name] != 'developer.example.com'
ORDER BY le.LogEntryId DESC;
Hi, I have a CSV file as lookup table which contains IP address and timestamp as fields. I need to perform a search in an index which filters out results with matching IPs and timestamps in the lookup table. I can filter out events with matching IPs with the following search string: index = index [|inputlookup lookuptable.csv | table src_a | rename src_a as src] The thing I just can't figure out is how could I match events with _time field and timestamp field in the lookup table. Timestamps in the file follow the same format as _time, for example, 2020-02-24T12:10:10.000+02:00 What should I add to the search string to match timestamps as well? Thanks in advance!
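A sketch of one approach: convert the lookup's timestamp to epoch and return it as _time from the subsearch, so each lookup row expands into (src=X AND _time=Y). This assumes the lookup's timestamp column is named timestamp and that exact-to-the-millisecond matches are really what is wanted:

```spl
index = index
    [| inputlookup lookuptable.csv
     | eval _time=strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%3N%:z")
     | fields src_a _time
     | rename src_a as src ]
```

Exact equality on _time is brittle; matching within a time window around each timestamp would need a different approach, such as pulling the lookup's timestamp into the main search with the lookup command and comparing with eval/where.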
Hello Team, from the log line below I would like to extract only the value 497 ("G1 Young Generation GC in 497ms") and then timechart the actual value. How do I do that?

Log print:
WARN [Service Thread] 2020-03-26 16:45:30,391 GCInspector.java:282 - G1 Young Generation GC in 497ms. G1 Eden Space: 683671552 -> 0; G1 Old Gen: 2290144840 -> 2009072128; G1 Survivor Space: 67108864 -> 62914560;
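A sketch, assuming the events are already in Splunk and matched by a base search of your choosing; the span and aggregation are illustrative:

```spl
... "G1 Young Generation GC"
| rex "G1 Young Generation GC in (?<gc_pause_ms>\d+)ms"
| timechart span=5m max(gc_pause_ms) as gc_pause_ms
```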
I have a file that I am monitoring which has its time in epoch format (milliseconds). What setting should be placed in props.conf to convert it to a human-readable timestamp?
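In props.conf, TIME_FORMAT understands %s for epoch seconds plus a subsecond width such as %3N for milliseconds. A minimal sketch; the stanza name is a placeholder and TIME_PREFIX/MAX_TIMESTAMP_LOOKAHEAD assume the 13-digit epoch sits at the start of each line:

```ini
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
```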
So I have some data in the format of:

Time                | UUID      | event_name_status | actual_important_log_time
2020-03-26T12:00:00 | 123456789 | car_end           | 2020-03-25T16:50:30
2020-03-26T12:00:00 | 123456789 | car_mid           | 2020-03-25T16:40:30
2020-03-26T12:00:00 | 123456789 | car_start         | 2020-03-25T16:30:30
2020-03-26T12:00:00 | 123456788 | car_end           | 2020-03-25T15:50:30
2020-03-26T12:00:00 | 123456788 | car_mid           | 2020-03-25T15:20:30
2020-03-26T12:00:00 | 123456788 | car_start         | 2020-03-25T14:50:30

This is a consistent pattern, with each transaction having a start, mid, and end, and a different UUID per transaction (also different vehicles for other transactions). I currently group them into transactions using the following search command:

* | transaction UUID startswith="car_start" endswith="car_end"

This groups the transactions, showing how many there were in the last X length of time (there could be hundreds/thousands in a day). I need to get the duration of each transaction using the actual_important_log_time field, and then use these values to get the average time the car transaction took. (This will then go in a dashboard.)
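Since the duration must come from actual_important_log_time rather than _time, one sketch is to parse that field to epoch and take the range per UUID. This assumes the timestamp format shown above and one start/end pair per UUID:

```spl
* (event_name_status=car_start OR event_name_status=car_end)
| eval log_epoch=strptime(actual_important_log_time, "%Y-%m-%dT%H:%M:%S")
| stats range(log_epoch) as duration_sec by UUID
| stats avg(duration_sec) as avg_duration_sec count as transactions
```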
I would like to get an alert if a value exceeds a threshold, e.g. Datafsused >= 50.

Log print:
Mar 26 16:12:05 127.0.0.1 fs_used_percentage_stats: Datafsused=43 Commitlogfsused=21 Backupfsused=81
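Since the log already uses key=value pairs, Splunk's automatic field extraction should yield Datafsused directly; a sketch of a search that could be saved as an alert (the index name is a placeholder):

```spl
index=your_index "fs_used_percentage_stats"
| where Datafsused >= 50
| table _time host Datafsused Commitlogfsused Backupfsused
```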
I created a custom alert action in Splunk Enterprise. When I try to use that action in ITSI for a correlation search, I don't see it as an option. How do I utilize my custom alert action inside ITSI?