All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have team members that receive notifications when our environment is undergoing maintenance.  Should I be getting those?  What is an Operational Contact and should I be added as one?
I wrote a query that joins two sourcetypes from my Salesforce instance, counts the total number of events from midnight today to now, from midnight yesterday to the same time yesterday, and the same for one week ago and one month ago, and then aggregates the counts by a breakdown of regions and locations. The query works great and gives me a table that's easy to read, but the stakeholder came back asking if I can add "delta columns" to the output, giving the difference between today and yesterday, today and 1 week ago, and today and 1 month ago in their own columns. I've got this written with a stats command at the end and can't figure out how to incorporate those differences, since each column in the stats is a sum of values I'm flagging in the search. I'm not sure if there's a way to add it in the stats command, or if I should scrap the stats command and visualize this another way (pivot table?). The time juggling is because Salesforce gives wonky timestamps. The join is because LoginEvent doesn't contain the fields I need to aggregate by, which are on the User object (custom fields).

index=sfdc sourcetype=sfdc:LoginEvent earliest=-32d Status=Success
| rename UserId as Id
| join Id [search sourcetype=sfdc:User]
| eval EventDateEpoch=strptime(EventDate,"%Y-%m-%dT%H:%M")
| eval TodayMidnightEpoch=relative_time(now(), "@d")
| eval TodayMidnightM1dEpoch=relative_time(now(), "-1d@d")
| eval TodayMidnightM7dEpoch=relative_time(now(), "-7d@d")
| eval TodayMidnightMMoEpoch=relative_time(now(), "-1mon@d")
| eval NowM1dEpoch=relative_time(now(), "-1d")
| eval NowM7dEpoch=relative_time(now(), "-7d")
| eval NowMMoEpoch=relative_time(now(), "-1mon")
| eval EventToday=if((EventDateEpoch>=TodayMidnightEpoch) and (EventDateEpoch<=now()), 1, 0)
| eval EventYesterday=if((EventDateEpoch>=TodayMidnightM1dEpoch) and (EventDateEpoch<=NowM1dEpoch), 1, 0)
| eval EventLastWeek=if((EventDateEpoch>=TodayMidnightM7dEpoch) and (EventDateEpoch<=NowM7dEpoch), 1, 0)
| eval EventLastMonth=if((EventDateEpoch>=TodayMidnightMMoEpoch) and (EventDateEpoch<=NowMMoEpoch), 1, 0)
| stats sum(EventToday) AS Today sum(EventYesterday) as Yesterday sum(EventLastWeek) as "Today -7d" sum(EventLastMonth) as "Today -1Mo" by State, Location
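One possible approach (a sketch, not the only way): keep the stats command exactly as above and append eval steps that subtract the columns it produces. Column names containing spaces take double quotes on the left-hand side of the eval assignment and single quotes when referenced on the right-hand side.

... existing search ending with the stats above ...
| eval "Delta -1d"=Today-Yesterday
| eval "Delta -7d"=Today-'Today -7d'
| eval "Delta -1Mo"=Today-'Today -1Mo'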
Hi, I have 4 PowerShell scripts I wrote for MSSQL servers, simple Invoke-Query PS commands to query the database health state (in terms of running queries, resource usage, etc.) and send the output as JSON to Splunk. They are really short scripts, and when I run them manually on the server they finish really fast. But when the scripts run from the input:

script://runpowershell.cmd script_name.ps1

it takes much longer (1-2 minutes) for the scripts and the powershell.exe processes to end, and during that period the CPU is at 100% (watching live in Task Manager, the 4 powershell.exe processes sit at the top of the list when sorted by CPU usage, highest first). I can't understand why. Notes: I use the common runpowershell.cmd method to execute powershell.exe with the ExecutionPolicy Bypass flag to avoid errors running the script. I'm aware of the SQL Server Add-on and the DB Connect method (I've taken the SQL queries from the add-on templates), but I'm going to monitor hundreds of MSSQL servers and I didn't want to configure hundreds of DB Connect connections and inputs for each server (a single HF is a single point of failure for all MSSQL monitoring, plus the performance impact, plus a lot of time to configure for hundreds of servers). So I'm converting the DB Connect SQL template queries to PS scripts to deploy from the DS, so each MSSQL UF runs the query locally and sends the output to Splunk.
Hi All, I am running a query and getting limited results in the Statistics tab (10,000). Earlier I was using the | sort command in the query, which was limiting the results to 10K, but after removing it I am getting full results. However, when I run the same query via the API, I am still getting the limited 10K results in statistics. Is this a default configuration in Splunk? Can I get full results running the query via the API? Thanks, Sushant
I have two lookup files. My first lookup file has the columns: ip, host, dnsName. We will call it List1.csv. The second lookup file has the columns: subnet, site, location (with the subnets in CIDR notation). We will call this one List2.csv.

I am trying to match up the items in List1 with their respective subnet from List2. However, my search does not return any results.

If someone could assist me with my search, I'd appreciate it!

| inputlookup List1.csv
| lookup List2.csv subnet output subnet site
| where cidrmatch(subnet, ip)
| table ip subnet dnsName
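For reference, a sketch of one common way to do this (the lookup definition name list2_cidr is made up here): the lookup command matches exact values unless the lookup is defined with CIDR matching, and the where cidrmatch(...) clause never gets a subnet value to test because the preceding lookup found no exact match. Defining List2.csv as a lookup definition with match_type = CIDR(subnet) and then matching the ip field against it might look like this:

transforms.conf (or the equivalent Advanced options in the lookup definition UI):
[list2_cidr]
filename = List2.csv
match_type = CIDR(subnet)

Search:
| inputlookup List1.csv
| lookup list2_cidr subnet AS ip OUTPUT subnet site location
| table ip host dnsName subnet site location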
I am looking to track the run times of analytics, as well as create logs of those run times, in order to build a dashboard showing the run times of these saved searches along with other relevant statistics (such as average search duration). Any experience in doing this?
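As a starting point, one possible sketch: the scheduler's own logs in _internal already record how long each scheduled saved search took, so something like the following could feed such a dashboard (field names are as they appear in scheduler.log; adjust the time range and grouping to taste):

index=_internal sourcetype=scheduler run_time=*
| stats count avg(run_time) as avg_run_time_sec max(run_time) as max_run_time_sec by savedsearch_name, app
| sort - avg_run_time_sec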
I have an app that the Upgrade Readiness App says is not passing due to an error. After digging into it more, it looks like it's occurring because of the line contrib/jquery-2.1.0 in the file /appserver/static/js/build/common.js. If I submit the same app to AppInspect, it doesn't have any results in the errors/warnings or in the manual checks for that line in common.js. Really my question is: I believe common.js comes from the Add-on Builder. Is it okay to leave that file as-is, or do I just change the line referencing contrib/jquery-2.1.0 to use an updated version of jQuery?
Hi Community, I'm no Windows expert and I'm just trying to tune an alert that we have in place. It fires whenever a UF that has `admon` has stopped sending `admon` logs. But I just noticed that `admon` logs can have the same `dcName` across multiple UFs. For example, a UF whose meta host is "serverA" sends `admon` logs for dcName="serverDC", and a UF whose meta host is "serverB" also sends `admon` logs for dcName="serverDC". Would it be reasonable to just replace the host of the UF with the value of dcName under the ActiveDirectory props.conf stanza? Thanks.
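If it helps, a rough sketch of what that host override would usually look like (an index-time transform referenced from the ActiveDirectory props stanza; the regex is an assumption about how dcName appears in the raw admon event and would need checking against real data):

props.conf:
[ActiveDirectory]
TRANSFORMS-set_host_from_dcname = admon_host_from_dcname

transforms.conf:
[admon_host_from_dcname]
SOURCE_KEY = _raw
# assumes the raw event contains something like: dcName=serverDC
REGEX = dcName="?([^",\r\n]+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

One trade-off to weigh: this rewrites the host at index time, so the identity of the forwarder that actually sent the event is lost on those events, which may matter if the alert is meant to catch a specific UF going quiet.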
Hello, I would like to ask about Windows logs in XML format. Using Splunk, we collect Windows logs in XML format because, before indexing in Splunk, we modify and reduce them in Cribl according to this document: Reducing Windows XML Events. It works fine, but now I would like to do one thing: convert values that are expressed in XML as numeric codes into their text form, as in the standard Windows log format. For example, in XML there is <Task>12544</Task>, and the corresponding value in the text format of the log is TaskCategory=Logon. So I tried to find conversion tables between the text and XML formats, and for the elements <Opcode>, <Keywords>, and <Level> I found some, but I cannot find any for the element <Task>. Does anyone know of one (or of tables for other XML elements as well)? If so, could you share it with me? It would be really appreciated. Best regards, Lukas Mecir
I get the following after upgrading to Splunk 8.2.4 on Splunk Enterprise + ES. I have a large environment with clustered SHs and indexers. Thank you for your reply in advance.

"Sum of 3 Highest per-cpu iowaits reached red Threshold of 15" on ES
"Maximum per-cpu iowait reached yellow Threshold of 5" on the search heads

What do they mean, and how do I fix these issues, please?
I have a search that is based on two event types - admin_login and admin_change. admin_login has two fields that admin_change does not: "admin_login_name" and "admin_login_email". The fields, other than those two previously mentioned, are the same. What I'm looking to do: if event_type=admin_login, then admin_login_name = source_user_name AND admin_login_email = source_user_email. Thanks in advance for the help.
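A minimal sketch of one way to express that with eval, assuming the intent is to copy the admin_login_* values into the source_user_* fields for login events (swap the assignments if the mapping goes the other way):

... your base search ...
| eval source_user_name=if(event_type=="admin_login", admin_login_name, source_user_name)
| eval source_user_email=if(event_type=="admin_login", admin_login_email, source_user_email)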
Hello, I would like to try using Splunk to calculate the difference in numbers from one sample to the next. Here are some theoretical log entries. The indexed data will look like this:

yesterday_timeStamp clusterId=abc, topicName=mytopic, partition=0, lastOffset=100
yesterday_timeStamp clusterId=abc, topicName=mytopic, partition=1, lastOffset=200
today_timeStamp clusterId=abc, topicName=mytopic, partition=0, lastOffset=355
today_timeStamp clusterId=abc, topicName=mytopic, partition=1, lastOffset=401

The number of events in the last 24 hours would be partition 0 (355-100=255) and partition 1 (401-200=201), and the sum of the partitions for the topic (mytopic) = 255+201=456. There will be many topicName(s) and possibly different numbers of partition(s) per topicName. Can I use Splunk to calculate the number of events per partition and topic since yesterday?
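A sketch of one way this could be done with stats, assuming each sample is an indexed event carrying those key=value fields and the search window covers both samples (index and sourcetype below are placeholders):

index=your_index sourcetype=your_sourcetype earliest=-2d@d
| stats earliest(lastOffset) as first_offset latest(lastOffset) as last_offset by clusterId, topicName, partition
| eval events_since_previous_sample=last_offset-first_offset
| stats sum(events_since_previous_sample) as events_for_topic by clusterId, topicName

The first stats gives the per-partition delta and the second rolls it up per topic; if there are more than two samples per day, streamstats with a by clause on clusterId, topicName, and partition would give per-sample deltas instead.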
Hi, I have a dashboard that works out the percentage of builds that complete inside SLO. I would like to be able to compare this month's results to those of last month and show a trend line as to how much this percentage has improved (or not). I appreciate that 'count' and timechart are normally used to show the trend line on a single value visualisation. Is there a solution that allows a trend line from a single value that is achieved via eval? Many thanks
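A possible sketch, with the caveat that the field names (build_duration, slo_seconds) and the SLO test are placeholders for whatever the existing eval does: the single value visualisation derives its trend from the last two points of a time series, so doing the eval per month inside and after a timechart gives it something to compare.

index=builds sourcetype=build_results
| eval in_slo=if(build_duration<=slo_seconds, 1, 0)
| timechart span=1mon sum(in_slo) as in_slo count as total
| eval pct_in_slo=round(100*in_slo/total, 1)
| fields _time pct_in_slo

With this as the panel search, the single value shows the latest month's percentage and the delta against the previous month.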
Hi all, I have an authorize.conf located in an application, which is usually deployed via the Deployer to the SH members. There is also an authorize.conf in our system/local directory (created by the GUI). Currently these two files are a bit interlinked. For example, a role is created at the app level (deployed via the Deployer), but some inheritance/reference was added via the GUI, so that entry gets written to system/local. Is there any way to identify whether the role is present in system/local or in the app?
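One quick check from the search bar (a sketch; I believe roles whose definition lives in system/local report the system app context on this endpoint, but it is worth confirming):

| rest /services/authorization/roles splunk_server=local
| search title="your_role_name"
| table title eai:acl.app imported_roles

For a definitive file-by-file answer, running splunk btool authorize list --debug on a SH member prints every effective setting prefixed with the exact .conf file it comes from.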
Hello, I have events that include a username field (and of course _time). I would like to count how many users were added each month, but there are months in which no new users were created. I can find the first appearance of each user using

| stats min(_time) by username

and then I can use timechart to count new users by month and streamstats to get the cumulative sum. I have found how to fill the gaps when there were no new users during a month by using the makecontinuous command. What I haven't figured out yet is how to fill the period before the first user creation, and the period from the last time a user was created until today.

... | timechart span=1mon count(username) as users
| makecontinuous _time span=1mon
| fillnull
| streamstats sum(users) as com

Thanks for the help
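A sketch of one way to extend the series up to the current month (the same idea, using addinfo and info_min_time, could pad the months before the first user if needed): append a zero-count row stamped with the current month, merge it with any real row for that month, then let makecontinuous and fillnull do the rest.

... | timechart span=1mon count(username) as users
| appendpipe [ stats count as users | eval users=0, _time=relative_time(now(), "@mon") ]
| stats sum(users) as users by _time
| makecontinuous _time span=1mon
| fillnull value=0 users
| streamstats sum(users) as cumulative_users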
I'd like to report an incomplete transform for RegistryValueData in Splunk_TA_microsoft_sysmon v1.0.1. It currently looks like this:

[sysmon-registryvaluedata]
REGEX = <Data Name='Details'>\w+\s\((.+)\)</Data>
FORMAT = RegistryValueData::$1

It works fine when Details contains: DWORD (0x00000001). But when Details is a string value, it doesn't match at all. What about this transform?

[sysmon-registryvaluedata]
REGEX = <Data Name='Details'>(?:([^(^)]*)|\w+\s\((.+)\))</Data>
FORMAT = RegistryValueData::$1
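One caveat with that proposal (an observation only, not tested against the TA): FORMAT references only $1, so whenever the DWORD (...) alternative is the one that matches, the value ends up in capture group 2 and RegistryValueData comes out empty. A variant that puts the more specific alternative first and concatenates both groups in FORMAT might behave better:

[sysmon-registryvaluedata]
REGEX = <Data Name='Details'>(?:\w+\s\((.+)\)|([^<]+))</Data>
FORMAT = RegistryValueData::$1$2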
Hi, I have to get the fields below extracted from these three logs to create a visualisation. Fields I am interested in: event_log type, originator_username, object, username, destination, bucket_name, time, type.

I have written this regex to create a parser, but I am not getting all the fields when writing the base search:

^(?:[^ \n]* ){2}(?P<event_log>\w+\s+[a-z_-]+)(?:[^ \n]* ){2}\{"originator_username"\:(?P<originator_username>.[a-z]+")\,"object"\:(?<object>.[a-z]+)[^,\n]*,"extra"\:\{(?P<extra>.[a-z]+)":[^,\n]*(?:[^,\n]*,){6}"time"\:(?P<time>\w+),(?:[^,\n]*,){2}"type"\:(?<type>.[a-z_]+[a-z])"}

Sample events:

2022-01-23 10:19:47,140 WARNING event_log EventLog: {"originator_username":"abc","object":"cluster","extra":{"username":"admin"},"object_type":"cluster","originator_uid":0,"time":164287087,"throttled_event_count":1,"obj_uid":null,"type":"failed_authentication_attempt"}
2022-01-23 07:24:05,479 INFO event_log EventLog: {"originator_username":"abcef","object":"bdb:1","extra":{"destination":{"bucket_name":"dbabucket","type":"s3","subdir":"radar2","filename":""}},"object_type":"bdb","originator_uid":0,"time":164767765,"throttled_event_count":1,"obj_uid":"1","type":"backup_succeeded"}
2022-01-23 07:15:00,294 INFO event_log EventLog: {"originator_username":"adminstrator","object":"bdb:1","object_type":"bdb","originator_uid":0,"time":1642788100,"throttled_event_count":1,"obj_uid":"1","type":"backup_started"}

Can anyone help me figure out what needs to be fixed in the regex so I can get all the needed fields extracted for the base search?
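Since everything after "EventLog:" is valid JSON, an alternative worth considering instead of one large regex (a sketch; the index and sourcetype are placeholders, and the table list can be trimmed to whatever the visualisation needs) is to rex out the prefix and the JSON blob and let spath handle the nested fields:

index=your_index sourcetype=your_sourcetype
| rex "^\S+\s+\S+\s+(?<log_level>\w+)\s+(?<event_log>\w+)\s+EventLog:\s+(?<payload>\{.+\})$"
| spath input=payload
| table _time log_level event_log originator_username object extra.username extra.destination.bucket_name time type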
Hello, I need to limit the number of values shown in a multivalue column in a dashboard. This was possible using advanced XML with this option:

<module name="SimpleResultsTable">
  <param name="allowTransformedFieldSelect">True</param>
  <param name="count">10</param>

But I fail to see how to do it in Simple XML now that advanced XML is deprecated. Is there a way to do this using a classic dashboard on newer versions of Splunk that no longer support advanced XML?
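As far as I know there is no direct table option for this in Simple XML, so one common workaround is to truncate the multivalue field in the search that backs the panel, for example keeping only the first 10 values (my_mv_field is a placeholder for the real field name):

... | eval my_mv_field=mvindex(my_mv_field, 0, 9)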
Hello, I have the following query to get data grouped by month and by software version, using the condition "where":

index=tst
| spath path="Check{}" output=Num
| where isnotnull(Num)
| timechart dc(run.ID) span=1mon by version
| addtotals

I'm wondering how to display the data grouped by month and by version, with the condition "where isnotnull(Num)", as a ratio of that number of events to the total. I tried to do it this way:

| dedup run.ID
| eventstats count(eval(isnotnull(Num))) as cnt, dc(run.ID) as total by version
| eval p=(cnt/total)*100
| timechart values(p) span=1mon by version
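A sketch of one alternative shape for this (keeping the dedup and the isnotnull test; xyseries just pivots the result back into one column per version):

index=tst
| spath path="Check{}" output=Num
| dedup run.ID
| bin _time span=1mon
| stats sum(eval(if(isnotnull(Num),1,0))) as with_num dc(run.ID) as total by _time, version
| eval pct=round(100*with_num/total, 1)
| xyseries _time version pct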
We are ingesting logs from an Imperva SQS queue in our AWS environment. We want to use a custom sourcetype for these logs, i.e. "imperva:incapsula", instead of the default sourcetype "aws:s3:accesslogs" in the Splunk Add-on for AWS. We made the changes on the backend in inputs.conf and restarted the service; the changes are reflected in the UI, and we can see the event below in the internal logs stating the input has been tagged with the new sourcetype. But the logs are still being indexed under the old sourcetype, i.e. aws:s3:accesslogs. We have tried multiple things, like creating a new custom input with the new sourcetype and creating a props.conf for the new sourcetype under the system/local directory, but it didn't help; the logs are still being indexed under the default sourcetype "aws:s3:accesslogs".

Internal logs after making the changes:

"2022-01-28 09:33:58,959 level=INFO pid=10133 tid=MainThread logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:run:635 | datainput="imperva-waf-log" start_time=1643362438 | message="Data input started." aws_account="SplunkProdCrossAccountUser" aws_iam_role="aee_splunk_prd" disabled="0" host="ip-172-27-201-15.ec2.internal" index="corp_imperva" interval="300" python.version="python3" s3_file_decoder="S3AccessLogs" sourcetype="imperva:incapsula" sqs_batch_size="10" sqs_queue_region="**-1" sqs_queue_url="https://***/aee-splunk-prd-imperva-waf" using_dlq="1""

props.conf

[imperva:incapsula]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\CEF\:\d\|
NO_BINARY_CHECK=true
TIME_FORMAT=%s%3N
TIME_PREFIX=start=
MAX_TIMESTAMP_LOOKAHEAD=128

inputs.conf

[aws_sqs_based_s3://imperva-waf-log]
aws_account = SplunkProdCrossAccountUser
aws_iam_role = aee_splunk_prd
index = corp_imperva
interval = 300
s3_file_decoder = S3AccessLogs
#sourcetype = aws:s3:accesslogs
sourcetype = imperva:incapsula
sqs_batch_size = 10
sqs_queue_region = ***-1
sqs_queue_url = https://**/aee-splunk-prd-imperva-waf
using_dlq = 1
disabled = 0

Has anyone faced a similar issue?
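One thing that might be worth checking (based on how the Splunk Add-on for AWS describes the SQS-based S3 input; please verify against the docs for your add-on version): with s3_file_decoder = S3AccessLogs the decoder assigns the access-log sourcetype itself, and a custom sourcetype is only honoured when the decoder is set to CustomLogs. A hedged sketch of the relevant part of the stanza:

[aws_sqs_based_s3://imperva-waf-log]
# ...other settings unchanged...
# assumption: CustomLogs is the decoder that respects a custom sourcetype
s3_file_decoder = CustomLogs
sourcetype = imperva:incapsula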