All Topics

I would like to create an alert when a new QID from Qualys is published. For that I'm using the FIRST_FOUND_DATETIME field and comparing it with today's date. The values in that field are in GMT; I want them in PST. Also, whenever FIRST_FOUND_DATETIME is the current date, the alert should trigger, or at least list the QIDs associated with today's date.
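
A minimal sketch of one approach, assuming FIRST_FOUND_DATETIME arrives as an ISO-style string (the exact format string and the fixed -8h offset are assumptions; a fixed offset ignores daylight saving, so adjust as needed):

| eval found_epoch=strptime(FIRST_FOUND_DATETIME, "%Y-%m-%dT%H:%M:%SZ")
| eval found_pst=strftime(found_epoch - 8*3600, "%Y-%m-%d %H:%M:%S")
| eval found_day=strftime(found_epoch - 8*3600, "%Y-%m-%d")
| eval today=strftime(now(), "%Y-%m-%d")
| where found_day=today
| table QID FIRST_FOUND_DATETIME found_pst

Scheduled as an alert, this would trigger whenever the result count is greater than zero.
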
Hello, I am trying to write a query that will display failed logins (Account_Name, Host, Count).

First query:

index=wineventlog EventCode=4625 | top limit=20 Account_Name host | where count > 9

Con: displays "-" in some of the Account_Name fields.
Pro: displays all the count and host fields correctly.

Second query:

index=wineventlog EventCode=4625 | rex "(?ms)Account For Which Logon Failed.+?Account Name:\s+(?<Account_Name>\V+)" | top limit=30 Account_Name host | where count >=9

Con: displays all the Account_Name, count and host fields correctly, but returns far fewer rows in the table than the first query.
Pro: displays all the Account_Name, count and host fields correctly.

I need a query that displays the Account_Name, count and host fields correctly and also returns the same number of results as the first query. Any help is appreciated. Thanks in advance.
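
A sketch of one way to combine the two, assuming the rex only matches some event layouts (Extracted_Name is a hypothetical intermediate field): fall back to the automatically extracted Account_Name when the rex does not match, so no rows are lost.

index=wineventlog EventCode=4625
| rex "(?ms)Account For Which Logon Failed.+?Account Name:\s+(?<Extracted_Name>\V+)"
| eval Account_Name=coalesce(Extracted_Name, Account_Name)
| top limit=30 Account_Name host
| where count >= 9
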
Hi, currently my query produces the correct results, but they are all aggregated into single cells, and I would like to have them separated depending on the results found. What I would like is to have Offers/Redeemed/Take_Rate listed and calculated for each unique combination of pointBank/merchant found.

So:

Offers  Redeemed  PointBank  Merchant  Take_Rate
2       1         A          A         50
3       1         A          B         33.3
6       3         B          A         50
5       1         B          C         20

My current query is:

host="server" source="/home/xyz.log" earliest=-1@d latest=now
| fields "promotionAction" "pointBankCode" "merchantCode"
| search (promotionAction="*") pointBankCode="*" merchantCode="*"
| stats count(eval(promotionAction= "OFFERED")) AS Offers count(eval(promotionAction= "ACCEPTED")) as Redeemed values(pointBankCode) as PointBank values(merchantCode) as Merchant
| eval Take_Rate=((Redeemed)/(Offers)*100)
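
A sketch of the usual fix, assuming the field names from the query above: grouping with a by clause instead of values() splits the aggregation per pointBank/merchant combination.

host="server" source="/home/xyz.log" earliest=-1@d latest=now
| stats count(eval(promotionAction="OFFERED")) as Offers
        count(eval(promotionAction="ACCEPTED")) as Redeemed
        by pointBankCode merchantCode
| rename pointBankCode as PointBank, merchantCode as Merchant
| eval Take_Rate=round(Redeemed/Offers*100, 1)
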
I'm trying to plot the following as a scatter chart: The y-axis should be the namespace. Namespace is a small set of strings, e.g. "default", "argo" or "kube-system". The x-axis is time. Each point should be coloured either green or red depending on whether the workflow succeeded or failed.

Problem 1 - you cannot have non-numeric x and y axes, and time does not appear to be treated as numeric. So how do I convert my namespace to a number? I think it should be 0..N based on its index in the set of values that namespace can take.

Problem 2 - how do I colour the points?

This is how far I have gotten so far:

index=foo sourcetype=eventrouter host="event-router-*" source="foo/*" event.involvedObject.kind=Workflow (event.reason=WorkflowSucceeded OR event.reason=WorkflowFailed)
| convert num(_time) as x
| table event.metadata.namespace x event.reason
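
A sketch of one approach, assuming the field names above: build the 0..N index dynamically with eventstats/mvfind instead of hard-coding it, and keep event.reason as a series field so the chart assigns one colour per series (green/red can then be forced with charting.fieldColors in the panel options). Note mvfind takes a regex, so namespace values containing regex metacharacters would need escaping, and the column order may need adjusting for the scatter renderer.

index=foo sourcetype=eventrouter host="event-router-*" source="foo/*"
  event.involvedObject.kind=Workflow
  (event.reason=WorkflowSucceeded OR event.reason=WorkflowFailed)
| rename event.metadata.namespace as namespace, event.reason as reason
| eventstats values(namespace) as all_ns
| eval y=mvfind(all_ns, "^" . namespace . "$")
| table _time reason y
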
I could retrieve the list of the transactions as a single event with the query below. Transactions start with "Dashboard Load:" and end with "Total Dashboard Load Time".

| rex "Account: (?<account>[^\,]*), SiteID: (?<siteID>[^\s]*) - (?<duration_in_ms>[^\s]*) milliseconds on Requestor: (?<requestor>[^\,]*)"
| transaction agent endswith="Total Dashboard Load Time"
| eval duration_in_sec=round(avg(duration_in_ms)/1000,2)
| table agent account siteID flow duration_in_sec

I could find the delta between each event, as multiple events, with the query below:

| rex "Account: (?<account>[^\,]*), SiteID: (?<siteID>[^\s]*) - (?<duration_in_ms>[^\s]*) milliseconds on Requestor: (?<requestor>[^\,]*)"
| streamstats current=f last(_time) as last_time2 by agent
| rename _time as current_time2
| eval delta= last_time2 - current_time2
| eval last_time =strftime(last_time2, "%m/%d, %H:%M:%S.%6N")
| eval current_time =strftime(current_time2, "%m/%d, %H:%M:%S.%6N")
| table agent account siteID flow duration_in_sec last_time current_time delta

I'm looking to see if I can find the delta between events within a single transaction. The transaction starts with "Dashboard Load" and ends with "Total Dashboard Load Time".

Expected result format: agent, account, siteID, list of flow (all 5 Dashboard Load actions), duration_in_sec, list of event times, list of deltas between event times.

agent - the name of the agent
account - account #
siteID - site #
list of flow - all action flows
duration_in_sec - Total Dashboard Load Time
list of event times - list showing all event times
list of deltas - list showing the 4 delta times between each flow

Below is a sample set of 5 events:

2021-12-16 05:30:43,834 alpha - Dashboard Load: User Clicked on New
2021-12-16 05:30:46,498 alpha - Dashboard Load: User Clicked on AccountSearch
2021-12-16 05:31:05,420 alpha - Dashboard Load: User Clicked on Search with String abcdef
2021-12-16 05:31:08,557 alpha - Dashboard Load: User clicked on Searched Result 123456
2021-12-16 05:31:12,234 alpha - Total Dashboard Load Time for Account: 000111222, SiteID: 123- 3438.0 milliseconds on Requestor: xoxoxo
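
A sketch of one way to get both, assuming flow is extracted elsewhere: compute the per-event delta with streamstats first, then let transaction roll the deltas and timestamps up into multivalue lists. The first event of each transaction would carry a delta from the previous transaction, so it may need to be nulled out.

| rex "Account: (?<account>[^\,]*), SiteID: (?<siteID>[^\s]*) - (?<duration_in_ms>[^\s]*) milliseconds on Requestor: (?<requestor>[^\,]*)"
| streamstats current=f last(_time) as prev_time by agent
| eval delta=_time - prev_time
| eval event_time=strftime(_time, "%m/%d, %H:%M:%S.%3N")
| transaction agent startswith="Dashboard Load:" endswith="Total Dashboard Load Time"
| eval duration_in_sec=round(duration_in_ms/1000, 2)
| table agent account siteID flow duration_in_sec event_time delta
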
Previously, I set up SSO for our controllers. This time, I need to set up SSO for the accounts portal in Azure. Are there any references or documents on how to do this?
Lines in my sourcetype are not being picked up correctly at all. Each event is being split into dozens of lines. Also, when I go into Settings in the UI for sourcetypes, I see all of the configs matching what I have set except for SHOULD_LINEMERGE = true, which comes up as false. I try resetting it in the UI and it still comes up as false, even though false should not be set anywhere. Btool shows it should be set to true, but the UI still shows false.

Btool shows these settings:

[kube:container:applicationservice-app]
BREAK_ONLY_BEFORE_DATE = true
LINE_BREAKER = (\d{2}\:\d{2}\:\d{2}\.\d{3})(?:\s\[Thread)
MAX_TIMESTAMP_LOOKAHEAD = 128
SHOULD_LINEMERGE = true
TIME_FORMAT = %H:%M:%S.%Q
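
One thing worth noting: the text matched by LINE_BREAKER's first capture group is discarded at the event boundary, so the stanza above eats the timestamp it breaks on. A sketch of an alternative, assuming each event starts with an HH:MM:SS.mmm timestamp followed by "[Thread" (the lookahead keeps the timestamp inside the event, and with clean single-line breaking SHOULD_LINEMERGE can stay false):

[kube:container:applicationservice-app]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{2}:\d{2}:\d{2}\.\d{3}\s\[Thread)
MAX_TIMESTAMP_LOOKAHEAD = 128
TIME_FORMAT = %H:%M:%S.%Q
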
I am working on a dashboard where my source has values like /opt/commands/abc.env, and I want to print XYZ in ConfigType if my source contains "commands". Right now I am using the regex

| rex field=source "(?<ConfigType>commands)"

and it prints ConfigType=commands, but I need it to print ConfigType=XYZ.
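
A sketch of two ways to do this (the "commands" to "XYZ" mapping is taken from the question; the "scripts"/"ABC" pair in the second form is a hypothetical placeholder showing how to extend it):

| eval ConfigType=if(match(source, "commands"), "XYZ", null())

or, for several mappings:

| eval ConfigType=case(match(source, "commands"), "XYZ", match(source, "scripts"), "ABC")
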
I have a dashboard with four pie charts across the top. I've set up drilldowns for these that open a new tab. However, I'm wondering if it is possible to put an events panel below them and, depending upon which pie chart is clicked, dynamically build the SPL for the events panel. The four pie charts have very different searches and drilldowns, so I cannot simply populate a few tokens for use with the events panel; I would need to actually build the SPL dynamically and pass it to the events panel. Is that even possible? Thanks.
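
A sketch of the usual Simple XML pattern, with hypothetical index and token names: each chart's drilldown sets a token containing the full SPL string, and the events panel runs whatever the token holds.

<chart>
  ...
  <drilldown>
    <set token="event_spl">index=web_logs status=$click.value$ | head 100</set>
  </drilldown>
</chart>
...
<event depends="$event_spl$">
  <search>
    <query>$event_spl$</query>
  </search>
</event>

Each of the four charts would set event_spl to its own, entirely different, SPL string.
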
It seems that version 1.1.8 from Feb. 6, 2021 still does not support Admin API v2 handlers for authentication logs, according to: Does Duo's Splunk Connector support Admin API v2 handlers for authentication logs? Are we supposed to use http://github.com/duosecurity/duo_log_sync/ to send to Splunk SIEM on 9997? It seems that config.yml does not support the sslPassword needed to write the logs to Splunk indexers. Could duo_splunkapp/bin/lib/duo_client's files (client.py at version 4.1.0) be upgraded to the same version as the ones in duo_client-4.3.0-py3.7.egg/duo_client/client.py at version 4.3.0? Any other options or inputs?
Hi, I need to schedule an alert to run every 2 minutes between 8 PM and 11 PM in Splunk Cloud. Could anyone help, please?
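
A sketch of the cron schedule that comes closest (the hour field 20-22 runs it every 2 minutes from 20:00 through 22:58; a separate schedule at 23:00 would be needed if 11 PM itself must be included):

*/2 20-22 * * *
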
Hi - I have a Splunk UF monitoring many directories on an rsyslog (receiver) server. One of the directories populated with logs as expected; however, the input stanza had the incorrect sourcetype and the data/logs did not index. Now, after removing the sourcetype, I need to reset the UF to re-monitor the log files in that single directory only. I do NOT want to re-index everything the UF monitors. Please advise the best way to handle this. Thank you.
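
Two common approaches, sketched with a hypothetical path. btprobe resets the fishbucket checkpoint for a single file (run with the UF stopped, once per file):

splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file /var/log/rsyslog/mydir/app.log --reset

Alternatively, adding crcSalt = <SOURCE> to just that monitor stanza in inputs.conf makes the UF treat every file under the stanza as new (it re-reads them all once, so apply it only to the one directory):

[monitor:///var/log/rsyslog/mydir]
sourcetype = corrected_sourcetype
crcSalt = <SOURCE>
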
Hi, could you please advise how to build a call to the Splunk REST API for the following search: search | inputlookup mylookup.csv. The goal is to run it in the context of another app, not the user's default one. I tried the following, but it gave me the same result:

| inputlookup mylookup.csv | eval app_name = $env:MyNewApp$

Thanks a lot!
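
The app context of a REST search is set by the namespace in the endpoint path, not inside the SPL. A sketch with hypothetical user, password, and app names:

curl -k -u admin:changeme https://localhost:8089/servicesNS/admin/MyNewApp/search/jobs \
  -d search="| inputlookup mylookup.csv" \
  -d exec_mode=oneshot -d output_mode=json

With exec_mode=oneshot the call blocks and returns the results directly, resolving mylookup.csv in MyNewApp's context (subject to the lookup's sharing permissions).
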
Hi - let's say you have a scheduled query/report that runs daily (at midnight) looking over a time range of the last 24 hours, and you summarize the results to index=summary_index_foo. There was a "foo" data-source outage for a couple of days; however, you were able to backfill the data to index=foo. What is the best way to re-run the query without creating a lot of duplicates?

I am pretty sure that using "collect" will create duplicates. But will re-scheduling a one-time clone of the report over the outage days and summarizing the results create duplicates if the time range overlaps into the data before and after the outage? In other words, the outage time frame was not exactly to the minute, hour, or day. When you re-schedule/re-summarize the query, will that create duplicates if the same data/event already exists in the summary index for that time? Or will Splunk drop duplicates when using the summary index? I am guessing duplicates will still be created, but I need a sanity check. Thank you.
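
For what it's worth, Splunk ships a backfill script for exactly this case; a sketch with a hypothetical report name and credentials (its -dedup true option skips time spans that already have summary data, which is the usual guard against overlap duplicates):

splunk cmd python $SPLUNK_HOME/bin/fill_summary_index.py -app search -name "my_summary_report" -et -3d@d -lt @d -j 4 -dedup true -auth admin:changeme
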
We were presented with a situation where non-admin users needed access to Splunk license data from the _internal index, but we cannot grant them admin access or access to the _internal index. The solution we came up with was to create a new index and use a scheduled report/search with collect to copy only license events to this new index, then grant access to the new index on a role basis appropriate for these users.

This did not work: the users with roles that should have read access to this index cannot see any events; only admin can, and I am struggling to figure out why. This is the basic search used to copy events:

index=_internal source=*license_usage.log type=Usage | collect index="collect_license"

It is scheduled to run every 15 minutes, collecting data for the past 15 minutes. The report is set to Display for App, with read access granted to Everyone. The job runs, and collect does "copy" data to the new index, where admin users can search it. However, the other roles with access to the index cannot. We cannot find any indication of restrictions on the "stash" sourcetype in the documentation, online, or in our environment. Users have role-based access to this new index, yet for some reason their searches turn up empty results.

So I have hit a wall here; I see no reason why the intended users lack access to events in the new index. The only thing left I can imagine is some inherited property, included when "extracting" and "copying" with collect, that maintains a restriction from the source index. Any information and/or suggestions that could help solve this would be greatly appreciated.
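
One thing worth double-checking is the role definition itself; a sketch of an authorize.conf stanza with a hypothetical role name (the index must appear in srchIndexesAllowed, directly or via inherited roles, and a restrictive srchFilter on the role can silently exclude the events even when the index is allowed):

[role_license_viewer]
srchIndexesAllowed = collect_license
srchIndexesDefault = collect_license
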
I have created a dashboard with multiple tabs, one per application. The first tab contains the health status of all the apps. I want to be able to click on the status of an application in the first tab and have it open the respective tab for that application. Since I just need to open the tab, I am assuming I do not need to pass any tokens, but please advise. Also, when I load the dashboard for the first time, I would like only the first tab to load; the remaining tabs should load on click, either directly on the tab or through the visualization on the first tab. I have used li elements to list all the tabs, and tabs.js & tabs.css for displaying the apps in each tab. @niketnilay
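
A sketch of the token side of this, with hypothetical token names: lazy loading is usually done by gating each tab's panels on a depends token that is only set when the tab is activated, and a drilldown on the first tab's status table can set the same token the tab click would set (a real multi-tab setup needs one token per tab: set the clicked one, unset the rest).

<table>
  <search><query>... health status search ...</query></search>
  <drilldown>
    <set token="tabApp2">true</set>
    <unset token="tabApp1"></unset>
  </drilldown>
</table>
...
<panel depends="$tabApp2$">
  ...
</panel>
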
E.g., how do I get the sums below in a single query?

sum(val_2) by application
sum(val_2) by val_1

Desired result (single query):

column1    column2
ABC        1478
FSD        4839
A          5849
B          478

or:

column1    column2    column3    column4
ABC        1478       A          5849
FSD        4839       B          478

or whatever is possible in a single query. Please help.
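
A sketch of the append approach, assuming a hypothetical base search over the same events (the subsearch re-reads them, so keep the time range tight); it produces the first, stacked layout:

index=myindex
| stats sum(val_2) as column2 by application
| rename application as column1
| append
    [ search index=myindex
      | stats sum(val_2) as column2 by val_1
      | rename val_1 as column1 ]
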
Hello Splunkers, I need to understand the best way to forward my data in a multisite indexer cluster for disaster recovery management. For example, we have:

On Site A
1 manager node (active)
3 peer nodes [IDX_1A, IDX_2A, IDX_3A] (active)
1 search head (active)
2 heavy forwarders [HF_1A, HF_2A] (active)

On Site B
1 manager node (standby)
3 peer nodes [IDX_1B, IDX_2B, IDX_3B] (active)
1 search head (standby)
2 heavy forwarders [HF_1B, HF_2B] (standby)

On HF_1A and HF_2A, should outputs.conf be configured to send data to:

1) ALL site A and site B indexers (IDX_1A, IDX_2A, IDX_3A, IDX_1B, IDX_2B, IDX_3B), supposing the HFs can communicate with all of them, OR
2) Only the site A indexers (IDX_1A, IDX_2A, IDX_3A), OR
3) Any other way?

Thanks in advance
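
For reference, a sketch of the indexer-discovery pattern in outputs.conf, with a hypothetical manager hostname and key: the forwarders ask the cluster manager for the current peer list instead of hard-coding it, so peers can come and go without outputs.conf changes.

[indexer_discovery:cm]
master_uri = https://cluster-manager-site-a:8089
pass4SymmKey = <your_key>

[tcpout:cluster_peers]
indexerDiscovery = cm
useACK = true

[tcpout]
defaultGroup = cluster_peers
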
This one may not be a hard one, but I am asking because I don't know how to explain what I am doing, and thus am not able to search the community posts to see if someone has already asked the question, but here it goes. What I am trying to do is create a search that will go back through the history for a particular field, find the last change to that data, and also display when that change happened. It's not something that I want running all the time, because I am sure it will take time to search through millions of logs, so it is just going to end up as a dashboard panel that can be run when needed.

Example: host ABC used to have IP address 1.1.1.1; the IP address is now 2.2.2.2, and I want to be able to find the time and date that host ABC changed IP address.
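
A sketch of one approach, with hypothetical index and field names: compare each event's ip to the previous one for the same host with streamstats, keep only the rows where it changed, and take the latest.

index=network host=ABC
| sort 0 _time
| streamstats current=f last(ip) as prev_ip by host
| where isnotnull(prev_ip) AND ip != prev_ip
| stats latest(_time) as changed_at latest(prev_ip) as old_ip latest(ip) as new_ip by host
| eval changed_at=strftime(changed_at, "%Y-%m-%d %H:%M:%S")
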
Hi All, I have a DB query and need help with a date filter.

| dbxquery connection="ITDW" shortnames=true query="SELECT GETDATE() as 'CurrentTime', [Incident_Number], INC.[Company], [Customer], PPL.Region as 'Customer_Region', PPL.Site_Group as 'Customer_Site_Group', PPL.Site as 'Customer_Site', [Summary], [Notes], [Service], [CI], [Impact], [Urgency], [Priority], [Incident_Type], [Assigned_Support_Group], [Assigned_Support_Organization], [Status], [Assignee], [Status_Reason], [Resolution], [Reported_Date], [Responded_Date], [Closed_Date], [Last_Resolved_Date], [Submit_Date], [Last_Modified_Date], [Owner_Group] FROM [shared].[ITSM_INC_MAIN] INC LEFT OUTER JOIN [shared].[ITSM_CMDB_People_Main] PPL ON INC.Customer_ID = PPL.Person_ID WHERE (([Assigned_Support_Group] = 'Ops-WAN')) AND [Submit_Date] BETWEEN DATEADD(D,-700,GETDATE()) AND GETDATE()"

I want some older data, filtered to only 2019, but when I search for more than 300 days it does not show accurate data. How can I add a specific year, or a between-dates filter, in the code?
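
A sketch of the WHERE clause replacement for a fixed year (T-SQL syntax, since the query already uses GETDATE/DATEADD; the half-open range avoids losing timestamps from the last day of the year):

WHERE [Assigned_Support_Group] = 'Ops-WAN'
  AND [Submit_Date] >= '2019-01-01'
  AND [Submit_Date] <  '2020-01-01'
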