All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We are getting an alert that "Maximum Custom Metric Limit is reached" for Databases. What does it mean? Should we ignore this alert?
Hi guys, I'm trying to use the Splunk App for Infrastructure, but I have an issue: some entities are in inactive status although they are receiving data (confirmed in the "Analysis" view), but they sometimes go back to active, and so on. As most of the "sometimes inactive" entities receive data every 5 minutes (they are not production hosts), I wonder if there is a parameter to tune the detection of this pseudo-inactivity? Thanks for your help, Francois
I am trying to write a search for Juniper firewall logs. I want to get an alert if any user consumes more than 500 MB of bandwidth in the last hour. I have the required fields in my Splunk data, such as bytes_in and bytes_out. Kindly suggest.

index=main sourcetype="juniper:junos:firewall" host="10.10.10.1"
| stats sum(bytes_out) AS TotalSent, sum(bytes_in) AS TotalRcvd by src
| eval TotalMB=round((TotalSent+TotalRcvd)/1024/1024,2)
| table src TotalMB
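A minimal sketch of the alert search, reusing the field names and 500 MB threshold from the post; a `where` clause after the eval keeps only the offenders, and `earliest=-1h` (or the alert's own time range) covers the last hour:

```
index=main sourcetype="juniper:junos:firewall" host="10.10.10.1" earliest=-1h
| stats sum(bytes_out) AS TotalSent, sum(bytes_in) AS TotalRcvd by src
| eval TotalMB=round((TotalSent+TotalRcvd)/1024/1024,2)
| where TotalMB > 500
| table src TotalMB
```

Scheduled hourly with a "number of results > 0" trigger condition, this would fire only when at least one source exceeds the threshold.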
Hello, I'm new to Splunk and I am trying to send some alerts to MS Teams. My alert runs every 5 minutes. I already installed the Microsoft Teams Webhook Alert Connector and Microsoft Teams Alerts in my Splunk Enterprise. I created a webhook in my MS Teams and added it to my alert in Splunk, but I'm still not receiving anything. On the other hand, I was able to see the alerts under Triggered Alerts. Is there anything I missed? Thank you in advance for any help!
I have a query like below:

index="us_west_prod_power_platform" sourcetype="spark:metric" metricName="HRTBT_LHIST_METRIC_DD" host="emr-prod-distributor" osm_zone_id
| timechart span=10m count
| eval ds_count = if(count >= "1","0","1")
| timechart span=10m values(ds_count)

In that query, "osm_zone_id" is a filter. I want osm_zone_id to be one of the fields of the search results, something like below:

index="us_west_prod_power_platform" sourcetype="spark:metric" metricName="HRTBT_LHIST_METRIC_DD" host="emr-prod-distributor" osm_zone_id
| timechart span=10m count
| eval ds_count = if(count >= "1","0","1")
| timechart span=10m values(ds_count)
| table osm_zone_id, time, ds_count

Kindly suggest.
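A sketch that keeps the zone as an output column, assuming osm_zone_id is an extracted field (the post uses it as a bare search term, so `osm_zone_id=*` here is an assumption). `bin` plus `stats by _time, osm_zone_id` stands in for `timechart`, which cannot carry an extra split field into a table; note that, unlike timechart, stats emits no row for a 10-minute bucket with zero events:

```
index="us_west_prod_power_platform" sourcetype="spark:metric" metricName="HRTBT_LHIST_METRIC_DD" host="emr-prod-distributor" osm_zone_id=*
| bin _time span=10m
| stats count by _time, osm_zone_id
| eval ds_count = if(count >= 1, "0", "1")
| table osm_zone_id, _time, ds_count
```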
Hello, I want to compare stats counts for the same host, and if the counts are not equal, create a new field and put "!" (or whatever) in it.

Hostname | Interface | Status | count | Alert

Scenario 1 (clear, no alert):
HostA | InterfaceA | InterfaceA-up | 8 |
HostA | InterfaceA | InterfaceA-down | 8 |

Scenario 2 (alert):
HostA | InterfaceA | InterfaceA-up | 8 |
HostA | InterfaceA | InterfaceA-down | 9 | !!!!!!!!!!!!!!!

Regards.
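One way to sketch this, assuming the Hostname/Interface/Status/count fields from the example: if the up and down rows of a host/interface pair carry different counts, `dc(count)` over the pair is greater than 1, and the Alert field gets set.

```
... | stats count by Hostname, Interface, Status
| eventstats dc(count) AS distinct_counts by Hostname, Interface
| eval Alert = if(distinct_counts > 1, "!", "")
| fields - distinct_counts
```

`eventstats` writes the aggregate back onto every row, so both the up and down rows of an alerting pair are flagged.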
Hi Splunkers, Ideally what happens is we set a threshold for the log file and set some retention, so files get created like:

audit.log
audit.log.1
audit.log.2
audit.log.3
audit.log.4

After reaching the threshold, audit.log.4 drops off and audit.log.3 becomes audit.log.4; similarly, audit.log.2 becomes audit.log.3. What I expect is that not only the last log file (audit.log.4) should drop off, but all the already-read files (audit.log.1 through audit.log.4) should drop off and new files should be created. I want this because we are forwarding the logs to QRadar, and the rotation is creating duplication in QRadar: one file is ingested into QRadar 4 times, with the same content under different names. TIA,
I was trying to install the Anomali ThreatStream Community App but got the following error:

"snapshot download failed. reason=optic_client> Failed to retrieve snapshot url: cannot find the snapshot"

How do I fix it? (I'm using the ThreatStream OnPrem product.)
I'm trying to bring new data into my Splunk standalone instance and getting this error in the _internal logs:

Incorrect path to script: /Applications/Splunk/etc/apps/my_app/local/bin/run_it.sh. Script must be located inside $SPLUNK_HOME/bin/scripts.

The thing is, the path to the script that the log shows doesn't exist. The path in the log has /my_app/local/bin/run_it.sh, but I've triple-checked and the bin folder in my app is at /my_app/bin/run_it.sh. There is no bin folder under the local directory that it appears to be referencing. Is there something I am missing when trying to get this data in? Thanks in advance
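The path in that error typically comes from the stanza header of an inputs.conf, so a copy of the input in my_app/local/inputs.conf may still point at the old local/bin location. A sketch of what the stanza would look like once it matches the actual script path (the interval and sourcetype values here are illustrative):

```
# my_app/local/inputs.conf
[script://$SPLUNK_HOME/etc/apps/my_app/bin/run_it.sh]
interval = 60
sourcetype = my_sourcetype
disabled = 0
```

Running `splunk btool inputs list --debug` shows which .conf file each stanza actually comes from, which makes a stale stanza easy to spot.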
Hello, I'm doing a search in Splunk for the "request_id" field. For example: request_id = "XXXXXXX". It returns data from 2 sources. I can do a dedup and get the last event, and it has everything I need except for the duration field. Is there a way I can copy the duration field and its value onto another event before running dedup? If yes, how can I do this in bulk? I have a subsearch with a table of request_ids, and I use it to search for all events matching those request_ids. How can I make sure that, for each individual request_id, the duration field is populated on all events? Thanks
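A sketch using `eventstats`, which copies an aggregate back onto every event in the group, so duration is present on whichever event dedup keeps (field names as in the post; the base search stands for the existing search plus its request_id subsearch):

```
... base search over both sources ...
| eventstats values(duration) AS duration by request_id
| dedup request_id
```

Because eventstats works per request_id group, this handles all the request_ids from the subsearch in one pass.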
I am trying to calculate the duration/time taken between 2 strings in an event using transaction startswith and endswith, but it is not giving the expected result and the format is different; I want a simple HH:MM:SS format.

Example of events returned:

[main] 19:28:06,435[batchLogId=, clientCode=] INFO org.ets.ereg.batch.common.listener.BaseJobListener.beforeJob(BaseJobListener.java:89) - Start : Before Job *************
...
[main] 20:05:07,411[batchLogId=15304309, clientCode=] INFO org.ets.ereg.batch.common.listener.BaseJobListener.afterJob(BaseJobListener.java:163) - End : After Job ***********

My requirement: I want to calculate the time taken, or duration, based on the timestamps in front of these, between "Start : Before Job" and "End : After Job". My query is:

index="ereg-prod" source="jobs.*log"
| transaction startswith="Start : Before Job" endswith="End : After Job"
| rex field=source "/*/logs/job-(?\S+).log"

I tried timechart and _time. What is the exact way to get it? Any suggestions would be helpful.
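`transaction` already computes a `duration` field in seconds; a sketch that renders it as HH:MM:SS with `tostring(..., "duration")`, reusing the search terms from the post:

```
index="ereg-prod" source="jobs.*log"
| transaction startswith="Start : Before Job" endswith="End : After Job"
| eval timetaken = tostring(duration, "duration")
| table source, timetaken
```

For spans over a day, this format prefixes the day count (e.g. 1+02:03:04).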
Hi, I successfully configured the AWS Redshift JDBC driver. I can connect to the database and run queries, but when I create a data input, in the last step, after clicking the "finish" button, it shows the error message "Unable to process JSON". This error only occurs when I define a rising column (I'm using int, following the DB Connect documentation). Has anyone else gotten past this? Thanks!
@to4kawa You have helped me a lot the past few weeks; lol, you will probably answer this one too!

I have these two searches that give me the single-value display numbers below. I need to make a new single-value display that subtracts the second value from the first. Normally this is easy, but the second value is based on a search that requires the host. My main goal is to get the latest result for each field (which I am doing in my current searches, maybe not so efficiently), then do the subtraction.

First search:
index="sense_power_monitor" | head 1 | table usage_info.solar_w
Result: (screenshot)

Second search:
index="homeautomationrtac" host="sunnyboy1_watts" | head 1 | rename instMag as SunnyBoy1Watts1 | eval SunnyBoy1Watts = if(SunnyBoy1Watts1 < 0, 0, SunnyBoy1Watts1) | table SunnyBoy1Watts
Result: (screenshot)

I need a search that will take the first and subtract the second, giving me a third display of 147 W. I know how to do the math in Splunk; I'm hung up on how to get my starting numbers from two different indexes and do the other math first. Any help would be appreciated.
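A sketch combining the two searches with `appendcols`, which glues the single-row result of the subsearch onto the main result as extra columns (search strings and field names as in the post; single quotes around usage_info.solar_w are needed because of the dot in the field name):

```
index="sense_power_monitor"
| head 1
| eval solar = 'usage_info.solar_w'
| table solar
| appendcols
    [ search index="homeautomationrtac" host="sunnyboy1_watts"
      | head 1
      | eval SunnyBoy1Watts = if(instMag < 0, 0, instMag)
      | table SunnyBoy1Watts ]
| eval difference = solar - SunnyBoy1Watts
| table difference
```

Since both sides are reduced to one row by `head 1`, the column-wise join of appendcols lines up cleanly.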
Hello everyone, Is there a way to assign a password to the universal forwarder to prevent it from being uninstalled? or what options do i have? Thanks Regards
I'm hoping to get help. I have the errors below in the same event, but on different lines. I would like to get the 1st column as Error, the 2nd as App, and the 3rd as count.

<Dsc>General Error:CODE0001-3032-CODE000-Error Msg 1</Dsc> <RpBy AppCd="EFG"/>
<Dsc>General Error:CODE0001-3032-CODE050-Error Msg 2</Dsc> <RpBy AppCd="XYZ"/>
<Dsc>General Error: Error, ANYTHING</Dsc> <RpBy AppCd="ABCD"/>

The error message always comes after "General Error:". I was able to extract it, but I also want to add the app name from the second line of the same event:

rex field=_raw max_match=100 "General Error:(?<error>[\`\~\:\-\{\}\[\]\;\'\"\*\&\%\$\#\@\!\(\)\^\\=\-\?\/\.\,\\/\w+\d+\s+]+)<\/Dsc>"

The app name is on the second line, within the double quotes. The results should be:

Error | App | count
CODE0001-3032-CODE000-Error Msg 1 | EFG | 1
CODE0001-3032-CODE050-Error Msg 2 | XYZ | 1
Error, ANYTHING | ABCD | 1
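A sketch that extracts the error and the app name together and pairs them up with mvzip/mvexpand. It assumes each <RpBy AppCd=.../> tag follows its <Dsc>...</Dsc> line, as in the sample, and uses a simpler [^<]+ pattern for the error text:

```
| rex field=_raw max_match=100 "General Error:(?<error>[^<]+)</Dsc>\s*<RpBy AppCd=\"(?<App>[^\"]+)\""
| eval pair = mvzip(error, App, "|")
| mvexpand pair
| eval Error = mvindex(split(pair, "|"), 0), App = mvindex(split(pair, "|"), 1)
| stats count by Error, App
```

mvzip keeps the nth error matched with the nth app, so the pairing survives mvexpand even when one event contains several errors.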
I see 3 different apps from 3 different authors on Splunkbase for Microsoft Windows Defender ATP; which one is the one to use?

Windows Defender ATP Modular Inputs TA: https://splunkbase.splunk.com/app/4128/
TA for Microsoft Windows Defender: https://splunkbase.splunk.com/app/3734/
TA for Defender ATP hunting API: https://splunkbase.splunk.com/app/4623/

There is also this:
REST API Modular Input: https://splunkbase.splunk.com/app/1546/
along with this:
https://github.com/ThiruYadav/Configure-Splunk-to-pull-Windows-Defender-ATP-alerts/blob/master/Configuration

Obviously, I would like to use the "best" one: the easiest one, or the one that is most current or best supported. How can I tell which one that is? An installation/user guide would be great, too. This is for the Common Information Model with Enterprise Security.
I'm working on a financial data dashboard, and I have a few panels that pull data from last year relative to this year (now).

Question 1. I'm trying to get a sum for the current week number, last year. They want to see sales data for this week last year and compare it to sales data for the current week. I see where I can get a "week number" as a field:

| my base search | eval weeknumber=strftime(_time,"%U")

What I'd like to be able to do is:

basesearch earliest=-1y,weeknumber17@w0 latest=-1y+current_#_of_days_in_this_years_week17

Question 2. Is there a Splunk earliest=currentfiscalyear latest=now, or do I have to construct something that will always identify February 1st regardless of the year? Or am I stuck entering earliest="2/1/2020:00:00:00" and just setting a reminder to edit the search once a year? Should I just define them in times.conf and then call them from the search? If so, what might that look like?
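Two sketches with standard relative-time modifiers, assuming week-to-date comparison is acceptable: `-1y@w0` snaps one year back and then to the start of that week (Sunday), and `latest=-1y` is the same moment one year ago, giving "this week last year, up to now". For the fiscal year, a times.conf stanza (stanza name and label are illustrative) can encode the February 1 start, with the caveat that `@y+1mon` resolves to Feb 1 of the current calendar year, so during January it would point at the coming fiscal year and need adjusting:

```
# Question 1: same week last year, week-to-date
basesearch earliest=-1y@w0 latest=-1y

# Question 2: times.conf stanza for a Feb 1 fiscal-year start
[fiscal_year_to_date]
label = Fiscal Year to Date
earliest_time = @y+1mon
latest_time = now
```

Once defined, the stanza appears as a preset in the time range picker rather than being typed into the search string.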
To access my saved searches via the Splunk API, is it a must to include the SID? I only ask because the saved search is on a schedule, and the SID changes each time the search runs. Are there any other options to query the saved search without using the SID?
Hello everyone, how do I use two such attributes on the same dashboard panel? I want to use both the panel id (for the font size) and the panel depends (for the dropdown token) on the same panel. Can anyone help me out?

My XML code:

<form theme="dark">
<label>Resize panal</label>
<fieldset submitButton="false">
<input type="dropdown" token="setid">
<label>sst</label>
<fieldForLabel>Area Nmae Details</fieldForLabel>
<fieldForValue>Area Nmae Details</fieldForValue>
<search>
<query>| inputlookup raj100|table ApName "Area Nmae Details"</query>
<earliest>0</earliest>
<latest></latest>
</search>
</input>
</fieldset>
<row>
<panel depends="$alwaysHideCSS$">
<html>
<style>
#myTableStyle{
font-size: 70% !important;
}
</style>
</html>
</panel>
<panel id="myTableStyle">
<panel depends="setid">
<!-- I want to use both the panel id (for font size) and panel depends (for the dropdown token) on this one panel -->
<table>
<search>
<query>| inputlookup raj100 FERD=$setid$|table ApName "Area Nmae Details" "Area CP Name" CLevel Date "Issue Description" "MD Name" PinID "Recommended Fix" "SC Title Name" Srate Task Title URL</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">4</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
</table>
</panel>
</row>
</form>
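In Simple XML, id and depends are just two attributes on the same <panel> element, so the two nested panels can be merged into one; a sketch of the relevant part, abbreviated to the query (note the depends value needs the token wrapped in $...$):

```
<panel id="myTableStyle" depends="$setid$">
  <table>
    <search>
      <query>| inputlookup raj100 FERD=$setid$ | table ApName "Area Nmae Details"</query>
    </search>
  </table>
</panel>
```

The panel then stays hidden until the dropdown sets $setid$, while the #myTableStyle CSS rule from the hidden HTML panel still applies to it.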
Hello all, I recently upgraded the Microsoft Azure Add-on TA to version 2.1. Not only did it break the configuration, but there are also some added permissions that need to be applied on the Azure portal side. I worked with someone on our Windows AD team who has the necessary access, but he did not see what is referenced below in the details of the add-on (https://splunkbase.splunk.com/app/3757/):

Microsoft Azure Active Directory Sign-ins: Microsoft Graph - Read all audit log data; Windows Azure Active Directory - (Application) Read directory data, (Delegated) Read directory data
Microsoft Azure Active Directory Users: Microsoft Graph - Read all audit log data; Windows Azure Active Directory - (Application) Read directory data, (Delegated) Read directory data
Microsoft Azure Active Directory Audit: Microsoft Graph - Read all audit log data; Windows Azure Active Directory - (Application) Read directory data, (Delegated) Read directory data

These are the errors in the internal logs. Any ideas?

04-28-2020 15:58:03.014 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-MS-AAD/bin/MS_AAD_audit.py" ERROR 401 Client Error: Unauthorized for url: https://graph.microsoft.com/beta/auditLogs/directoryAudits?$orderby=activityDateTime&$filter=activityDateTime+gt+2020-04-21T15:58:02.316173Z+and+activityDateTime+le+2020-04-28T19:51:02.559995Z