All Topics

I've been having difficulty with this for a while and am looking for some help. I'm attempting to find users logging in and whether they are using username/password or a smart card.

I search for EventCode 4768 and return the user, IP, pre-authentication type, and timestamp from indexA, then eval the timestamp for each of those events to Logon_Timestamp. The search window is the last 30 days. I then search indexB, for each result of the main search, for action="Request" events around the same timeframe, get the timestamp, xuser, and zuser, and eval the timestamp for all of those events to Request_Time. I search indexB again, for each result of the main search, for action="Connect" events around the same timeframe, get the timestamp, xuser, and xhost, and eval the timestamp for all of those events to Connect_Time. After getting all of that information and renaming Account_Name and xuser to Admin_User, I need to create a table joining all of them with the following columns: Logon_Timestamp, Logon_Method, Request_Time, Connection_Time, Admin_User, Requesting_User, ipaddress, Environment, hostname, group.

I can get this partially working using different methods like join, multisearch, and append, but joining all of these events, renaming each of their timestamps, and tying them together by Admin_User is proving quite a challenge. Join gets the information I want, but matching up timestamps between the three events doesn't seem possible, and it takes a while. Append returns most of what I need, and I can organize it by making the two indexB queries subsearches, but I can't eval the timestamp fields for them and put them into their own columns. Multisearch looked really promising, but only the results of the first search are returned. I've had a good deal of success using stats and putting all of them into one search using OR, but once again I can't successfully eval the timestamp fields.
Below is one of my 20 or so queries; it gives me the closest results, but I can't get the event timestamps matched up to ones that occur around the same time for the user.

index=indexA sourcetype=WinEventLog:Security "EventCode=4768" ("-admin" OR "-service" OR "-user") Pre_Authentication_Type!=- Client_Address!="10.x.x.x" Client_Address!="10.x.x.x"
| eval Logon_Timestamp = strftime(_time, "%m-%d-%Y %H:%M:%S")
| rename Account_Name as xuser
| join max=0 type=left xuser
    [ search index=indexB host="10.x.x.x" ("device=10.x.x.x" OR "device=10.x.x.x") action="Request"
    | rename zuser as "Requesting_User"
    | eval Request_Time = strftime(_time, "%m-%d-%Y %H:%M:%S") ]
| join max=0 type=left xuser
    [ search index=indexB action="Connect"
    | eval Connection_Time = strftime(_time, "%m-%d-%Y %H:%M:%S") ]
| rename xuser as "Admin_User"
| eval Admin_User=lower(Admin_User)
| eval ipaddress=replace(Client_Address, "::ffff:", "")
| eval Logon_Method=case(Pre_Authentication_Type < 14, "Username/Password", Pre_Authentication_Type > 14, "PIV")
| eval Request_Time = if(isnull(Request_Time), "No Request Found", Request_Time)
| eval Connection_Time = if(isnull(Connection_Time), "N/A", Connection_Time)
| lookup assetlist.csv ipaddress OUTPUTNEW Environment, hostname, group
| where like(hostname, "%-%-%") AND NOT like(hostname, "ex-host-name-%")
| table Logon_Timestamp, Logon_Method, Request_Time, Connection_Time, Admin_User, Requesting_User, ipaddress, Environment, hostname, group

Anyone have any experience with something like this?
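Not a definitive answer, but one pattern that avoids join entirely is to append all three event sets, normalize the user field, and correlate with stats inside a time bucket. A rough sketch using the field names from the post (the 5-minute correlation window, and eval'ing the per-type timestamps before the stats, are assumptions to illustrate the approach):

```spl
index=indexA sourcetype=WinEventLog:Security "EventCode=4768"
| eval Admin_User=lower(Account_Name), event_type="logon"
| append
    [ search index=indexB action="Request"
    | eval Admin_User=lower(xuser), event_type="request", Requesting_User=zuser ]
| append
    [ search index=indexB action="Connect"
    | eval Admin_User=lower(xuser), event_type="connect" ]
| eval ts=strftime(_time, "%m-%d-%Y %H:%M:%S")
| eval Logon_Timestamp=if(event_type="logon", ts, null()),
       Request_Time=if(event_type="request", ts, null()),
       Connection_Time=if(event_type="connect", ts, null())
| bin _time span=5m
| stats values(Logon_Timestamp) as Logon_Timestamp
        values(Request_Time) as Request_Time
        values(Connection_Time) as Connection_Time
        values(Requesting_User) as Requesting_User
        by Admin_User, _time
```

Because everything lands in one event stream, each event type's timestamp can be eval'd into its own column before the stats, which is the part the append and stats attempts above were missing; the bin span controls how close in time events must be to land in the same row.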
I am trying to assign a value to a parameter in a macro, based on a calculation of a value being sent to the macro, but I do not get the expected result. index=my_index ... earliest=exact($time$-4000) latest=$time$... How can I assign the earliest value, which is supposed to be 4,000 seconds less than the value of $time$?
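For what it's worth, earliest is parsed before any eval runs, so arithmetic like exact($time$-4000) is not evaluated inline. A common workaround (a sketch, not tested against this particular macro) is to compute the value in a subsearch and hand it back:

```spl
index=my_index ... earliest=[| makeresults | eval e=$time$-4000 | return $e] latest=$time$
```

The `return $e` form emits just the computed value, which the outer search then uses as its earliest time.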
Hello, I have a TA developed through the Splunk Add-on Builder that has two different data collectors configured. If I create an input instance, it only runs the first data collector that was added. Both inputs have different fields available to them, so that doesn't really help; plus, they need to run on different intervals. How do we configure the TA to allow for different input configurations based on a selectable input name? Thanks.
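For reference, when a TA defines two modular input types, each type gets its own stanza prefix in inputs.conf, so two independently scheduled instances would look roughly like this (the stanza and field names here are placeholders; the real ones come from the input names defined in the builder):

```ini
[my_first_collector://instance_a]
interval = 300
some_field = value_for_a

[my_second_collector://instance_b]
interval = 3600
other_field = value_for_b
```

If only the first collector ever runs, it may be worth checking whether both input types were actually exported into the TA's inputs.conf.spec, rather than both instances having been created under the same input type.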
I work at a utility, and we have an index that contains SCADA events from the electric system. We have data that goes back to 2015, and there is a very large number of total events (1.8 billion or so). I had an engineer trying to trend some voltages over a long time period, and it was discovered that Splunk had removed all of the events before 8/1/2020. I cleaned the index and added enableTsidxReduction=false. I then cleaned and reloaded the index, and it appears it has removed events prior to Jan 1, 2017 this time. The total size of this index is only around 60 GB; the SQL database we are loading it from is 100 GB total, and these events are only two tables. We use DB Connect with a rising column for loading: MSSQL to a dedicated SCADA index, with two inputs, one for each table.

I would like size to be the only factor controlling when data leaves the index, and I would also prefer for buckets to only be hot and warm; cold is on a much slower storage system and we have plenty of hot/warm space. What are the conf file settings that achieve this? I have found the spec for indexes.conf and it is very daunting; I have scrolled through it and it is hard for me to understand which settings are the right ones to use. Is there a guide somewhere that outlines the behavior and controls for index data management? We run a distributed system with two indexers on 8.2.3. Thanks for the help. Lee.
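Not a full guide, but the usual knobs for size-only retention live in indexes.conf on the indexers. A sketch for a SCADA index (all paths and values are placeholder examples to show which settings do what; coldPath must still be defined even if buckets should never roll there):

```ini
[scada]
homePath   = $SPLUNK_DB/scada/db
coldPath   = $SPLUNK_DB/scada/colddb
thawedPath = $SPLUNK_DB/scada/thaweddb
# effectively disable time-based freezing (~79 years)
frozenTimePeriodInSecs = 2500000000
# size cap for the whole index, in MB; oldest buckets freeze when exceeded
maxTotalDataSizeMB = 200000
# keep buckets in warm instead of rolling them to cold
maxWarmDBCount = 99999
# 0 = no separate size limit on the hot/warm path
homePath.maxDataSizeMB = 0
```

The earlier data most likely disappeared because the default frozenTimePeriodInSecs (roughly six years) was hit, not because of tsidx reduction, which only trims index files rather than deleting events.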
Hello, it looks like the action field is not returning results for almost all of the indexes. This is only impacting one of the search heads; the action field is working normally on the other search heads (NOT clustered).

Example: index=foo returns all data, but when I add index=foo action=allowed it returns almost nothing.
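A quick way to see whether the extraction itself is missing on the affected search head is to compare what fieldsummary reports on each one (index name taken from the example above):

```spl
index=foo earliest=-1h | fieldsummary | search field=action
```

If the field shows up on the healthy search heads but not on the broken one, the next step would be comparing the relevant props/transforms on both, e.g. with `splunk btool props list <sourcetype> --debug`, since un-clustered search heads can easily drift apart in their knowledge objects.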
Hello Splunk Community, I'm fairly new to Splunk and am using it to search and alert me for testing failures in my manufacturing environment. I have a search in which I would like to match up two different events and get a search hit ONLY when both failures occurred on the same order number. I have three primary fields I'll be using: OrderNum, adviseText, and testName. I want my search result to return the order number when all criteria are met. To me, logically this looks like ((adviseText = "Diagnostic Error" AND testName = "Test 1") AND (adviseText = "Diagnostic Error" AND testName = "Test 2")). I've used this to test and got no results, and I understand that it's because no single event matches both criteria. Many OrderNums fail one or the other, but I need the search to single out OrderNums that fail both. Can anyone help me with this? Much appreciated.
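Since no single event carries both failures, the standard trick is to match either event and then require both per OrderNum with stats. A sketch using the field names above:

```spl
(adviseText="Diagnostic Error" testName="Test 1") OR (adviseText="Diagnostic Error" testName="Test 2")
| stats dc(testName) as failed_tests, values(testName) as tests by OrderNum
| where failed_tests=2
```

dc(testName) counts the distinct failing tests per order, so only orders that failed both Test 1 and Test 2 survive the where clause.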
Scenario on a SHC, Splunk 8.2.2.1:

user1 and user2 are two users in the user role. user1, who is in the user role, owns a private extraction (and saved searches). She is leaving the company and wants user2 to now own the knowledge objects. admin does a reassign of all knowledge objects from user1 to user2 (and yes, they probably got the warning that this might make knowledge objects inaccessible). Now no one, including admin, can access this knowledge object from the UI or via curl .. /services/configs/conf-props/extractnamehere/acl.

Fortunately, the props.conf file in /opt/splunk/etc/users/user1/search/local/props.conf is still there. Is there any other way the admin could regain access to this knowledge object other than grabbing the configs off the file system of the search head?
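One avenue worth trying before falling back to the file system is writing the ACL directly over REST as admin; orphaned objects can sometimes still be modified even when the UI refuses to show them. A sketch (the host, the credentials, and the stanza name from the scenario above are all placeholders to adjust):

```shell
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/configs/conf-props/extractnamehere/acl \
  -d owner=user2 -d sharing=user
```

On a SHC this would need to run against the captain so the change replicates to the other members.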
Full disclosure: I'm on the Salesforce team in my org and working with our Splunk team, but we're having an issue getting an object to read into Splunk. I've checked things on my side and they look correct; I'm trying to find some other avenues to pursue on the Splunk side (and to idiot-check that my side is, indeed, set correctly). Salesforce object: LoginEvent. Accessing this object requires either the Salesforce Shield or Salesforce Event Monitoring add-on subscription and the View Real-Time Event Monitoring Data user permission. https://developer.salesforce.com/docs/atlas.en-us.platform_events.meta/platform_events/sforce_api_objects_loginevent.htm We assign object access to our Splunk integration user via permission sets, and I've confirmed that permset has the View Real-Time Event Monitoring Data flag = true. Is there anything else I can/should check on my side? I see no other access requirements for this object. My understanding is that the Splunk team isn't seeing any errors on their end; it's just not "doing" anything (not reading it in).
I am using Splunk Enterprise v8.2.3.2. I am trying to alert when a scheduled search becomes disabled. The problem is that I have four systems using the same app but with different searches enabled and disabled on each of the systems. I need to dynamically determine which system the alert is running on and get the corresponding list of searches that are supposed to be enabled from a lookup table. I have done that. Now I need to see if the disabled search name matches one of the search names in the lookup table list. The list is like:

Searches that should be enabled (fieldname searches): apple tart,blueberry pie,carrot cake,cupcake
Search found to be disabled (fieldname disabled): carrot cake

I would like to do something like:

eval failed=if(in(disabled,searches),"Failed","Passed")
where disabled in(searches)
or: search disabled IN searches

However, none of these approaches has worked. Any advice? Thanks in advance.
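One approach that does work is to turn the comma-separated list into a multivalue field and test membership with mvfind, which returns null when there is no match. A sketch, assuming the field names from the post and that the search names contain no regex special characters:

```spl
| eval failed=if(isnotnull(mvfind(split(searches, ","), "^".disabled."$")), "Failed", "Passed")
```

The ^ and $ anchors keep "carrot cake" from matching a longer name such as "extra carrot cake"; in() and IN don't help here because they expect literal values rather than a field holding a list.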
I am very new to Splunk and trying to gain as much knowledge as possible. I found there is an app called Splunk Global Threat Landscape/IP Watch List, which I installed, but I am getting zero results. I most definitely feel I should be seeing some type of results. Is anyone familiar with this app who can provide some feedback?
Attempting to send events/incidents to ServiceNow from Splunk. We've completed all of the configuration steps on the ServiceNow side, and when we open the ServiceNow app (inside Splunk Cloud) and try to add the ServiceNow account, we get the message: "An error occurred while trying to authenticate. Please try again." These are the log entries showing up in TA_Snow_Error_Output. Has anyone seen this before and/or found a way through it?

2022-01-14 19:27:00,053 ERROR pid=27053 tid=MainThread file=splunk_ta_snow_rh_oauth.py:handleEdit:106 | Error occurred while getting access token using auth code
2022-01-14 19:19:40,670 ERROR pid=17428 tid=MainThread file=splunk_ta_snow_account_validation.py:validate:119 | Failure occurred while verifying username and password. Response code=403 (Forbidden)
Hi all, I have a question about the scheduled PDF delivery option for dashboards. I have some dashboards scheduled for daily PDF delivery, but they sometimes have no results. I want to configure the PDF delivery option to send email only on the days with results. Is there a way to do this? Thanks.
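Dashboard PDF schedules fire unconditionally, so the usual workaround is to schedule the underlying search as a report/alert with a trigger condition and a PDF attachment instead. A savedsearches.conf sketch (the stanza name, schedule, and recipient are placeholders):

```ini
[daily_results_report]
cron_schedule = 0 8 * * *
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.sendpdf = 1
action.email.to = team@example.com
```

The counttype/relation/quantity trio is what suppresses the email on days with zero results; the same condition can be set in the UI under the alert's trigger settings.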
Splunk DB Connect connection to AWS Athena: has anyone used that yet?
Hello, my application generates a daily log file with the file name App_YYYYMMDD.log, for example App_20220118.log, App_20220119.log. I am trying to write a query which should return a table with a single column having value '0' if the file for the current week day has not been generated. The query should also wait for a certain time before returning '0'. For example, let's say the query should wait 8 hours from the start of the day (12 AM) before returning the result. Can you please share if you have written a similar query? Regards, Syed
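One sketch of this (the index name is an assumption): schedule the search at 8 AM via cron so the "wait 8 hours" requirement is handled by the schedule itself, then count today's events from the expected file:

```spl
index=app_index source=*App_*.log earliest=@d
| eval expected="App_".strftime(now(), "%Y%m%d").".log"
| where like(source, "%".expected)
| stats count
| eval status=if(count=0, "0", "1")
| table status
```

stats count always returns a row even over zero events, so the table reliably shows 0 when the day's file never arrived.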
I am working on a query that generates a table with a count of security violations. I want to filter out the users with violations greater than 10.

| rex field=_raw "(?<Message>Security\sviolation)\s\S+\s\S+\s(?<User>[A-Z0-9]+)"
| eval Time = strftime(_time, "%m-%d-%Y %H:%M:%S")
| rename JOBNAME as Jobname Time as Date
| eval Workload = substr(Jobname,1,3)
| stats count(Message) as "Security Violations" by Jobname User

Resulting table:

User  Security Violations
ABC   1
DEF   4
GHI   12
JKL   3
XYZ   20

Thank you,
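Assuming "filter out" here means keeping only the users over the threshold, a where clause after the stats does it (note the single quotes, which SPL needs to reference a field name containing spaces):

```spl
| stats count(Message) as "Security Violations" by Jobname User
| where 'Security Violations' > 10
```

Flip the comparison to <= 10 if the intent was the opposite.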
I am actually new to Splunk and trying to learn. Is there a way to group the results based on a particular string? Although I found some answers here already, they are confusing to me. It would be really helpful if someone could answer based on my use case. Below is a sample log line that I am getting:

[2022-01-19T13:30:15.664+00:00] [odi] [ERROR] [ODI-1134] [oracle.odi.agent] [tid: 304720] [ecid: 0000NtmXuVIC^qqMwMmZMG1XqvzZ000cRu,0:68:129:176:208:135] [oracle.odi.runtime.MrepExtId: 1501670917734] [oracle.odi.runtime.AgentName: OracleDIAgent2] [oracle.odi.runtime.ExecPhase: ExecuteTask] [oracle.odi.runtime.OdiUser: _odi] [oracle.odi.runtime.WrepName: WORKREP] [oracle.odi.runtime.ScenarioName: WEB_BOOKINGS_MV] [oracle.odi.runtime.ScenarioVer: 001] [oracle.odi.runtime.LoadPlanName: wfl_WebDataSet_MV_Refresh_2TimesAD]

I want to group the results based on oracle.odi.runtime.LoadPlanName and find the latest log for each one. Could someone please assist?
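A sketch of one way to do this (the index/sourcetype and the rex are assumptions based on the sample line): extract the load plan name, then let stats keep the most recent event per plan:

```spl
index=odi sourcetype=odi:agent
| rex field=_raw "oracle\.odi\.runtime\.LoadPlanName:\s(?<LoadPlanName>[^\]]+)\]"
| stats latest(_time) as latest_time, latest(_raw) as latest_event by LoadPlanName
| eval latest_time=strftime(latest_time, "%Y-%m-%d %H:%M:%S")
```

If the field is already extracted automatically, the rex line can be dropped and the existing field name used in the by clause instead.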
Could you help us confirm whether the Splunk REST APIs support OAuth authentication apart from the existing basic authentication (username/password) and authentication tokens (link)? We see a lot of customers enquiring about it. Also, is it mandatory to always use the authentication token mechanism for a service account in Splunk, or can we use a username/password as well? @sloshburch
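For context on the second question: a Splunk authentication token is sent as a bearer header, while basic auth still works against the same endpoints, so a service account is not forced onto tokens. A sketch (host, credentials, and token are placeholders):

```shell
# token-based
curl -k -H "Authorization: Bearer <token>" \
  https://localhost:8089/services/authentication/current-context

# basic auth, same endpoint
curl -k -u svc_account:password \
  https://localhost:8089/services/authentication/current-context
```

Both calls return the identity Splunk resolved for the request, which makes this a convenient endpoint for verifying either mechanism.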
We just upgraded from Enterprise Security 6.4.x to 6.6.2. In version 6.4, we were able to run real-time searches, pause the search, and grab and work the notable. We upgraded to 6.6.2, and now that feature no longer exists. I don't see an auto-refresh option in version 6.6.2, so my question is: how can I get the Incident Review dashboard to auto refresh so we don't have to do it manually? Thanks in advance. Scott
Hi guys, I tried installing Splunk Phantom as an unprivileged user as per the documentation: https://docs.splunk.com/Documentation/SOARonprem/5.0.1/Install/InstallUnprivileged Although I pretty much get through the process without problems, when I get to the last step I get warnings about storage. The installation does continue and then completes (I think). I then navigate to the ./bin directory and run the ./start_phantom.sh script, but it gives me a connection-to-postgres error. Postgres is installed, so I don't know what the issue could be. Note this is a standalone instance of Phantom. Has anyone experienced something similar? Also, I cannot access the frontend, but I assume this is because Phantom is not running.
Hi, I have installed an add-on to integrate Splunk with Opsgenie, but it is not working. After finishing the setup, none of the functionality is working, such as the inputs or tenant pages. Add-on installed: https://splunkbase.splunk.com/app/3759/  Used this link for reference: https://support.atlassian.com/opsgenie/docs/integrate-opsgenie-with-splunk/#:~:text=Opsgenie%20provides%20a%20two%2Dway,forward%20Splunk%20alerts%20to%20Opsgenie Has anyone integrated Splunk with Opsgenie? If yes, please help me with the setup. Thanks in advance.