All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, using logs I am generating some stats to track the performance of my app on a daily basis, using the query below.

search ...
| rex "elapsedTime=(?<ElapsedTime>.*?),\s*MLTime"
| rex "X\-ml\-timestamp\: (?<TimeStamp>.*?)\s*\n*X-ml-maxrows"
| rex "X\-ml\-size\: (?<size>.*?)\s*\n*X-ml-page"
| rex "X\-ml\-page\: (?<page>.*?)\s*\n*X-ml-count"
| rex "X\-ml\-elapsed\-time\: (?<MLelapsed>.*?)\s*\n*X-ml-timestamp"
| stats max(size) AS Page_Size max(_time) AS End_Time min(_time) AS Start_Time max(page) AS Pages count(page) AS Total_Pages max(ElapsedTime) AS Max_ElapsedTime min(ElapsedTime) AS Min_ElapsedTime avg(ElapsedTime) AS Avg_ElapsedTime max(MLelapsed) AS Max_MLElapsedTime min(MLelapsed) AS Min_MLElapsedTime avg(MLelapsed) AS Avg_MLElapsedTime
| eval CASS_Date=strftime(Start_Time, "%Y-%m-%d")
| eval CASS_Duration=(End_Time-Start_Time)/60
| eval End_Time=strftime(End_Time, "%Y/%m/%d %T.%3Q")
| eval Start_Time=strftime(Start_Time, "%Y/%m/%d %T.%3Q")
| table CASS_Date Start_Time End_Time CASS_Duration Page_Size Pages Total_Pages Max_ElapsedTime Min_ElapsedTime Avg_ElapsedTime Max_MLElapsedTime Min_MLElapsedTime Avg_MLElapsedTime

Can someone please help me produce the same stats for multiple days with a single query, instead of my collecting them manually every day?
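A minimal sketch of one possible approach, assuming the same extracted fields as above: bucket the events by day and add that day to the stats by-clause, so each day becomes one row of the same statistics. Field names are copied from the original query; the base search terms are placeholders.

search ...
| rex "elapsedTime=(?<ElapsedTime>.*?),\s*MLTime"
| rex "X\-ml\-elapsed\-time\: (?<MLelapsed>.*?)\s*\n*X-ml-timestamp"
| bin span=1d _time AS CASS_Day
| stats max(_time) AS End_Time min(_time) AS Start_Time max(ElapsedTime) AS Max_ElapsedTime min(ElapsedTime) AS Min_ElapsedTime avg(ElapsedTime) AS Avg_ElapsedTime max(MLelapsed) AS Max_MLElapsedTime min(MLelapsed) AS Min_MLElapsedTime avg(MLelapsed) AS Avg_MLElapsedTime by CASS_Day
| eval CASS_Date=strftime(CASS_Day, "%Y-%m-%d")
| eval CASS_Duration=(End_Time-Start_Time)/60
| eval Start_Time=strftime(Start_Time, "%Y/%m/%d %T.%3Q"), End_Time=strftime(End_Time, "%Y/%m/%d %T.%3Q")
| table CASS_Date Start_Time End_Time CASS_Duration Max_ElapsedTime Min_ElapsedTime Avg_ElapsedTime Max_MLElapsedTime Min_MLElapsedTime Avg_MLElapsedTime
| sort CASS_Date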
I'd like to disable the default watchlists (default/getwatchlist.conf), as my client's system is air-gapped, and instead use some internally sourced lists. I've looked at the SPEC file for getwatchlist.conf (README/getwatchlist.conf.spec) and there's no mention of a "disabled" setting. Nor is there any reference to a disabled setting within the command's source code (bin/getwatchlist.py). The simple workaround is to just delete default/getwatchlist.conf, but my preference would be to have a local/getwatchlist.conf in which I can set disabled=true for each of the default watchlist stanzas and then add in my own stanzas.
Hi all, I am passing some data in JSON format to Splunk using curl. When I try to pass the URL it gives the error "nested brace in URL position 19". I can't work out what went wrong, even though all the braces are properly matched. Can anyone help me with this?
Please help me extract the payload data from these log entries, specifically the PlatformVersion and PlatformClient values. I need this in Python code.

Log entries:

"tracking~2015~526F3D98","2015:1302",164,1,"2022-02-07 11:10:08.744 INFO [threadPoolTaskExecutorTransformed5 - ?] saving event to log =core-server-event-tracking-api, payload={""PlatformVersion"":""6.34.36 - 4.18.6"",""PlatformClient"":""html""},53
"tracking~2015~526F3D98","2015:130",164423,1,"2022-02-07 11:10:08.744 INFO [threadPoolTaskExecutorTransformed5 - ?] saving event to log =core-server-event-tracking-api, payload={""PlatformVersion"":""6.34.37 - 4.18.7"",""PlatformClient"":""xml""},54

Thanks
Hello all, I have a lookup that is saved by a scheduled report that runs once a week. The scheduled report finds the new email addresses returned by the search and writes them to another lookup. The issue I have is that I get duplicates, since this search runs once a week. Is there a way I can avoid duplicates using outputlookup? dedup is not doing the trick...

| inputlookup Stored_Email_lookups.csv | table Email, User_Id | rename User_Id as "New User" | dedup Email | outputlookup append=true "New_Incoming_Emails.csv"
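A minimal sketch of one way to keep the destination lookup free of duplicates, assuming the same file and field names as above and that New_Incoming_Emails.csv already exists: merge the existing contents of the destination lookup back into the results, dedup on Email, and overwrite the file instead of appending to it.

| inputlookup Stored_Email_lookups.csv
| table Email, User_Id
| rename User_Id as "New User"
| inputlookup append=true New_Incoming_Emails.csv
| dedup Email
| outputlookup New_Incoming_Emails.csv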
I am attempting to configure the Microsoft Graph Security API Add-On for Splunk (https://splunkbase.splunk.com/app/4564/#/details). Per the documentation, after installing the app you must configure it. Under the "Configuring Microsoft Graph Security data inputs" section it details the account information you need to enter (Account Name, Application ID and Client Secret registered). However, when I click Add (Configuration > Account) I'm prompted for Account name, Username, and Password, not those other values. I've installed 1.2.3 and 1.2.4 and I see the same Add Account options in both. Is there another way I can configure those values? I'm running Splunk Enterprise 8.1.0.1 on CentOS. Thanks, Rob
Hello, I'm new to Splunk and I searched and tried many solutions before asking here, but I'm really stuck. In my first assignment at work, there is a dashboard with a CASE clause comparing dates; those dates are Canadian holidays. Currently those dates are hard-coded in the SPL, but I need to replace them with something dynamic, such as a lookup. So I created a lookup like this:

date,day
2022-01-03,New Year's Day
2022-01-17,Martin Luther King Jr. Day
2022-02-21,Family Day
2022-04-15,Good Friday
2022-05-23,Victoria Day
2022-05-30,Memorial Day
2022-06-20,Juneteenth National Independence Day
2022-07-01,Canada Day
2022-07-04,Independence Day
2022-08-01,Civic Holiday
2022-09-05,Labour Day
2022-10-10,Canada Thanksgiving Day
2022-11-24,US Thanksgiving Day
2022-12-26,Christmas Day
2022-12-27,Boxing Day

The dashboard search looks like this:

index=main_index sourcetype=main_sourcetype
| eval secondDayOfMonth = strftime(strptime(StartTime, "%Y%m%d-%H:%M:%S"), "%Y-%m-%d")
| eval CuttOffHour=case( secondDayOfMonth="2022-01-03" , 1, secondDayOfMonth="2022-01-17" , 2, secondDayOfMonth="2022-02-21" , 3, secondDayOfMonth="2022-04-15" , 4, secondDayOfMonth="2022-05-23" , 5, secondDayOfMonth="2022-05-30" , 6, secondDayOfMonth="2022-06-20" , 7, secondDayOfMonth="2022-07-01" , 8, secondDayOfMonth="2022-07-04" , 9, secondDayOfMonth="2022-08-01" , 10, secondDayOfMonth="2022-09-05" , 11, secondDayOfMonth="2022-10-10" , 12, secondDayOfMonth="2022-11-24" , 13, secondDayOfMonth="2022-12-26" , 14, secondDayOfMonth="2022-12-27" , 15, )
| table secondDayOfMonth CuttOffHour
| rename secondDayOfMonth as "SECOND DAY OFF MONTH" CuttOffHour as "CUT OFF HOUR"

I have tried different approaches, such as:

| eval CuttOffHour=case( secondDayOfMonth=[inputlookup holydays_lookup] , 1,
| eval CuttOffHour=case( secondDayOfMonth=search[inputlookup holydays_lookup] , 1,

and another ten attempts, but they always return an error. Please help me, I want to learn.
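A minimal sketch of one possible lookup-based approach, assuming the CSV above is available as a lookup named holydays_lookup and that a third column (cutoff_hour, a hypothetical name) is added so each holiday row carries its own cut-off value; the lookup command then replaces the whole case() expression.

date,day,cutoff_hour
2022-01-03,New Year's Day,1
2022-01-17,Martin Luther King Jr. Day,2
...

index=main_index sourcetype=main_sourcetype
| eval secondDayOfMonth = strftime(strptime(StartTime, "%Y%m%d-%H:%M:%S"), "%Y-%m-%d")
| lookup holydays_lookup date AS secondDayOfMonth OUTPUT day AS Holiday cutoff_hour AS CuttOffHour
| table secondDayOfMonth CuttOffHour
| rename secondDayOfMonth as "SECOND DAY OFF MONTH" CuttOffHour as "CUT OFF HOUR"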
Dear Team, we are planning to use the "Salesforce Commerce Cloud Add-on for Splunk". We are looking for a sample reference for the connection strings to use when connecting to an SFCC instance via the UI. Kindly guide us. https://splunkbase.splunk.com/app/6098/#/details Thanks, Sreeharinath
I have team members that receive notifications when our environment is undergoing maintenance.  Should I be getting those?  What is an Operational Contact and should I be added as one?
I wrote a query that joins two sourcetypes from my Salesforce instance, counts the total number of events from midnight today to now, from midnight yesterday to "the now time" yesterday, and the same for one week ago and one month ago, and then aggregates the counts by a breakdown of regions and locations. The query works great and gives me a table that's easy to read, but the stakeholder came back asking if I can add "delta columns" to the output, giving the difference between today and yesterday, today and 1 week ago, and today and 1 month ago in their own columns. I've got this written with a stats command at the end and can't figure out how to incorporate those differences, since each column in the stats is a sum of values I'm flagging in the search. I'm not sure if there's a way to add it in the stats command, or if I should scrap the stats command and visualize this another way (pivot table?). The time juggling is because Salesforce gives wonky timestamps. The join is because LoginEvent doesn't contain the fields I need to aggregate, which are on the User object (custom fields).

index=sfdc sourcetype=sfdc:LoginEvent earliest=-32d Status=Success
| rename UserId as Id
| join Id [search sourcetype=sfdc:User]
| eval EventDateEpoch=strptime(EventDate,"%Y-%m-%dT%H:%M")
| eval TodayMidnightEpoch=relative_time(now(), "@d+0")
| eval TodayMidnightM1dEpoch=relative_time(now(),"-d@d")
| eval TodayMidnightM7dEpoch=relative_time(now(),"-7d@d")
| eval TodayMidnightMMoEpoch=relative_time(now(),"-1mon@d")
| eval NowM1dEpoch=relative_time(now(), "-1d")
| eval NowM7dEpoch=relative_time(now(),"-7d")
| eval NowMMoEpoch=relative_time(now(),"-1mon")
| eval EventToday=if((EventDateEpoch>=TodayMidnightEpoch) and (EventDateEpoch<=now()), "1" , "")
| eval EventYesterday=if((EventDateEpoch>=TodayMidnightM1dEpoch) and (EventDateEpoch<=NowM1dEpoch), "1", "")
| eval EventLastWeek=if((EventDateEpoch>=TodayMidnightM7dEpoch) and (EventDateEpoch<=NowM7dEpoch), "1", "")
| eval EventLastMonth=if((EventDateEpoch>=TodayMidnightMMoEpoch) and (EventDateEpoch<=NowMMoEpoch), "1", "")
| stats sum(EventToday) AS Today sum(EventYesterday) as Yesterday sum(EventLastWeek) as "Today -7d" sum(EventLastMonth) as "Today -1Mo" by State, Location
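A minimal sketch of one way the delta columns might be added, assuming the stats output above: since each difference is between two columns of the same row, it can be computed with eval after the stats command (field names containing spaces are referenced in single quotes in eval).

| stats sum(EventToday) AS Today sum(EventYesterday) AS Yesterday sum(EventLastWeek) AS "Today -7d" sum(EventLastMonth) AS "Today -1Mo" by State, Location
| eval "Delta -1d"=Today-Yesterday
| eval "Delta -7d"=Today-'Today -7d'
| eval "Delta -1Mo"=Today-'Today -1Mo'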
Hi, I have 4 PowerShell scripts I wrote for MSSQL servers: simple Invoke-Query PS commands that query the database health state (in terms of running queries, resource usage, etc.) and send the output as JSON to Splunk. They are really short scripts, and when running them manually on the server they run really fast. But when the scripts run from the input:

script://runpowershell.cmd script_name.ps1

it takes much longer for the scripts and the powershell.exe processes to finish (1-2 minutes), and during that period the CPU is at 100% (viewing live in Task Manager, the 4 powershell.exe processes are first in the list when ordering by CPU usage, highest first). I can't understand why. Notes: I use the common runpowershell.cmd script method to execute powershell.exe with the ExecutionPolicy Bypass flag to avoid errors running the script. I'm aware of the SQL Server Add-on and the DB Connect method (I've taken the SQL queries from the add-on templates), but I'm going to monitor hundreds of MSSQL servers and didn't want to configure hundreds of DB Connect connections and inputs for each server (a single HF is a single point of failure for all MSSQL monitoring, plus the performance impact, plus a lot of time to configure for hundreds of servers). So I'm converting the DB Connect SQL template queries to PS scripts to deploy from the DS, so each MSSQL UF will run the query locally and send the output to Splunk.
Hi All, I am running a query and getting limited results in the Statistics tab (10,000). Earlier I was using the | sort command in the query, which was limiting the results to 10K; after removing it I get the full results in the UI. However, when I run the same query via the API, I still get only the limited 10K results in statistics. Is this a default configuration in Splunk? Can I get the full results when running the query via the API? Thanks, Sushant
I have two lookup files. My first lookup file has the columns: ip, host, dnsName. We will call it List1.csv. The second lookup file has the columns: subnet, site, location (with the subnets in CIDR notation). We will call this one List2.csv. I am trying to match up the items in List1 with their respective subnet from List2, but my search does not return any results. If someone could assist me with my search, I'd appreciate it!

|inputlookup List1.csv
|lookup List2.csv subnet output subnet site
|where cidrmatch(subnet, ip)
|table ip subnet dnsName
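A minimal sketch of one possible approach, assuming a lookup definition (called list2_subnets here, a hypothetical name) is created over List2.csv with match_type = CIDR(subnet) in its advanced settings: the lookup command can then match each ip against the CIDR subnets directly, with no separate cidrmatch step.

|inputlookup List1.csv
|lookup list2_subnets subnet AS ip OUTPUT subnet site location
|table ip host dnsName subnet site location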
I am looking to track the run times of our analytics and to log those run times, in order to build a dashboard showing the run times of these saved searches along with other relevant statistics (for example, the average duration of each search). Any experience in doing this?
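A minimal sketch of one possible starting point, assuming the analytics run as scheduled saved searches: Splunk's scheduler logs in the _internal index record a run_time for each scheduled execution, so something like the following might feed such a dashboard.

index=_internal sourcetype=scheduler status=success
| stats count AS runs avg(run_time) AS avg_runtime_sec max(run_time) AS max_runtime_sec by savedsearch_name, app
| sort - avg_runtime_sec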
I have an app that the Upgrade Readiness App says is not passing due to an error. After digging into it more, it looks like the error is caused by the line contrib/jquery-2.1.0 in the file /appserver/static/js/build/common.js. If I submit the same app to AppInspect, it doesn't have any results in the errors/warnings or in the manual checks for that line in common.js. My real question is: I believe common.js comes from the Add-on Builder. Is it okay to leave that file as is, or should I just change the line referencing contrib/jquery-2.1.0 to use an updated version of jQuery?
Hi Community, I'm no Windows expert and am just trying to tune an alert that we have in place. It fires whenever a UF that has `admon` enabled has stopped sending `admon` logs. But I just noticed that `admon` logs can have the same `dcName` across multiple UFs. For example, a UF whose meta host is "serverA" sends `admon` logs for dcName="serverDC", and a UF whose meta host is "serverB" also sends `admon` logs for dcName="serverDC". Would it be reasonable to just replace the host of the UF with the value of dcName under the ActiveDirectory props.conf stanza? Thanks.
Hello, I would like to ask about Windows logs in XML format. Using Splunk, we collect Windows logs in XML format, because before indexing in Splunk we modify and reduce them in Cribl, according to this document: Reducing Windows XML Events. It works fine, but now I would like to do one more thing: convert values that are expressed in the XML as numeric codes into their text form, as in the standard Windows log format. For example, the XML contains <Task>12544</Task>, and the corresponding value in the text format of the log is TaskCategory=Logon. So I tried to find conversion tables between the text and XML formats; for the elements <Opcode>, <Keywords> and <Level> I found some, but I cannot find any for the <Task> element. Does anyone know of one (or of tables for other XML elements as well)? If so, could you share it with me? It would be really appreciated. Best regards, Lukas Mecir
I get the following after upgrading to Splunk 8.2.4 on Splunk Enterprise + ES. I have a large environment with clustered SHs and indexers. Thank you in advance for your reply. "Sum of 3 Highest per-cpu iowaits reached red Threshold of 15" on ES, and "Maximum per-cpu iowait reached yellow Threshold of 5" on the search heads. What do these mean, and how do I fix the issues, please?
I have a search that is based on two event types - admin_login and admin_change. admin_login has two fields that admin_change does not: "admin_login_name" and "admin_login_email". The fields, other than those two, are the same. What I'm looking to do: if event_type=admin_login then admin_login_name = source_user_name AND admin_login_email = source_user_email. Thanks in advance for the help.
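A minimal sketch of one way this might be expressed with eval, assuming the intent is to populate the source_user_* fields from the admin_login_* fields on admin_login events (the direction of the copy is an assumption; reverse the arguments if the mapping goes the other way):

| eval source_user_name=if(event_type=="admin_login", admin_login_name, source_user_name)
| eval source_user_email=if(event_type=="admin_login", admin_login_email, source_user_email)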
Hello, I would like to try using Splunk to calculate the difference in numbers from one sample to the next. Here are some theoretical log entries. The data indexed will look like this:

yesterday_timeStamp clusterId=abc, topicName=mytopic, partition=0, lastOffset=100
yesterday_timeStamp clusterId=abc, topicName=mytopic, partition=1, lastOffset=200
today_timeStamp clusterId=abc, topicName=mytopic, partition=0, lastOffset=355
today_timeStamp clusterId=abc, topicName=mytopic, partition=1, lastOffset=401

The number of events in the last 24 hours would be partition 0 (355-100=255) and partition 1 (401-200=201), so the sum of the partitions for the topic (mytopic) = 255+201=456. There will be many topicName(s) and possibly different numbers of partition(s) per topicName. Can I use Splunk to calculate the number of events per partition and per topic since yesterday?
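A minimal sketch of one possible approach, assuming the field names above (the index name and time range are placeholders): take the latest and earliest lastOffset per partition over the window and subtract, then sum the per-partition counts for each topic.

index=my_kafka_index earliest=-1d@d
| stats latest(lastOffset) AS last_offset earliest(lastOffset) AS first_offset by clusterId, topicName, partition
| eval events_per_partition=last_offset-first_offset
| stats sum(events_per_partition) AS events_per_topic by clusterId, topicName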