All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


For anyone else like me in the future trying to get this to work, the solution from @ITWhisperer is for use in a dashboard. You should be able to get this to work outside a dashboard like so:  | inputlookup test.csv | map search="| makeresults | map search=\"$magic$\""
Hi @ITWhisperer, Sorry, I didn't quite get you. I see a total of 267 events matched out of 85k events. I am not sure if this answers your question.
Hi guys, I've tried to set up an alert with two alert actions (email and Slack) from a custom app. When the alert triggered:

02-09-2024 21:40:04.155 +0000 INFO SavedSplunker - savedsearch_id="nobody;abc example alert (NONPRD)", search_type="scheduled", search_streaming=0, user="myself@myself.com", app="abc", savedsearch_name="example (NONPRD)", priority=default, status=success, digest_mode=1, durable_cursor=0, scheduled_time=1707514800, window_time=-1, dispatch_time=xxxxxxxx, run_time=0.884, result_count=2, alert_actions="email", sid="scheduler_xxxxxxxxxx", suppressed=0, thread_id="AlertNotifierWorker-0", workload_pool="standard_perf"

However, I received the email alert but not the Slack alert. Is there any way to debug why the Slack alert was not sent when there are two alert actions? How can I check whether the webhook URL is correct and working? Can someone please provide the complete steps to troubleshoot issues like this? Thank you!
@EPitch  Do you mean if the sum of count is > 10, or if the number of distinct name/ip/id combinations is more than 10? If the former, then putting | head 11 after your search should speed it up - although it will probably still process the query data fully, it will only retain at most 11 results, so if you then run stats count and the count is 11, you know you have more than 10.
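The head-then-count trick above can be sketched in plain Python (the helper name is made up for illustration, not a Splunk API): to decide "more than N?", you only ever need to look at N+1 items.

```python
from itertools import islice

def more_than(iterable, n):
    """Return True if iterable yields more than n items.

    Like `| head 11` before `stats count`: take at most n+1 items;
    if we actually got n+1, there are more than n, and we never
    had to hold the full result set.
    """
    return len(list(islice(iterable, n + 1))) > n

print(more_than(range(1_000_000), 10))  # True
print(more_than(range(5), 10))          # False
```

The same reasoning applies in SPL: head 11 caps retained results at 11, and a count of 11 proves "more than 10" without keeping everything.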
@interrobang ok, got it. The easiest thing to do is to add the following to your dropdown search after the dedup Servername:

| appendpipe [ stats values(Servername) as Servername | format | rename search as Servername | eval name="All" | eval order=0 ] | sort order Servername | fields - order

What this does is add a new row at the end containing all the server names, and create a new name field, which holds either the Servername or "All". The purpose of the order field is to sort "All" to the top, followed by the servers in sorted order. Set fieldForValue to Servername and fieldForLabel to name. Then if you select All, the value will be Servername=A OR ... See this example to see how it works:

| makeresults | fields - _time | eval Servername=split("ABCD","") | mvexpand Servername | eval name=Servername | eval Servername="Servername".Servername | appendpipe [ stats values(Servername) as Servername | format | rename search as Servername | eval name="All" | eval order=0 ] | sort order Servername | fields - order
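As a rough Python sketch of the option list this produces (server names invented; the exact parenthesization emitted by | format differs slightly): each server maps to itself, while "All" maps to an OR of every server.

```python
servers = ["ServernameA", "ServernameB", "ServernameC", "ServernameD"]

# "All" first (order=0 in the SPL), carrying an OR of every server -
# roughly what `stats values | format` builds; then one option per server.
options = [("All", " OR ".join(f'Servername="{s}"' for s in servers))]
options += [(s, f'Servername="{s}"') for s in servers]

for label, value in options:
    print(label, "->", value)
```

Selecting "All" therefore injects the full OR expression into the search, so no special-casing is needed downstream.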
Hi @abhi04, To filter with the where or search commands at the end of the pipeline, try the untable command instead of the transpose command:   | mstats rate_avg(abc*) as abc* where index=def span=3m | untable _time instance MessagesRead | eval MessagesRead=round(MessagesRead, 0) | where ...    
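To see why untable helps, here is a small Python sketch of the wide-to-long reshape it performs (field names invented for illustration): each metric column becomes its own row, so instance, hour, and minute all exist as ordinary fields that where/search can filter.

```python
# One wide row per _time, one column per instance (as mstats emits).
wide = {"_time": 1700000000, "abc_inst1": 12.4, "abc_inst2": 0.2}

# untable-style reshape: one (_time, instance, value) row per column.
long_rows = [
    {"_time": wide["_time"], "instance": k, "MessagesRead": round(v)}
    for k, v in wide.items()
    if k != "_time"
]
print(long_rows)
```

With the data in this shape, a condition like NOT (instance="*xyz*" AND hour=09 ...) can reference instance and the time fields in the same expression, which transpose made impossible.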
There are no miracles. If they are showing in the Forwarder Management section on a server different from your designated DS, they must have been pointed there somehow. Check the deployment server definition on your forwarders.
Hi All, I am using mstats for a metric, and I am evaluating my hour and minute fields something like below:

| mstats rate_avg(abc*) prestats=false WHERE "index"="def" span=3m | rename rate_avg(*) as * | eval Date=strftime(_time,"%m/%d/%Y") | eval hour=strftime(_time,"%H") | eval minute=strftime(_time,"%M") | transpose column_name=instance | rename "row 1" as MessagesRead | eval MessagesRead=ROUND(MessagesRead,0) | where MessagesRead < 1

Now I am unable to use the filter condition below:

| search NOT (instance="*xyz*" AND hour=09 AND (minute>=00 AND minute<=15))

as I don't want to alert for one particular instance from 9:00 to 9:15, but it should still alert for the other instances during that time period. Before the transpose the instance field does not exist, so I can't use the filter, and after the transpose I am unable to filter on hour and minute. Can you please help with filtering after the transpose?
Thank you for your answer.
First, install and configure Splunk Supporting Add-on for Active Directory on your search head or search head cluster. If you're using Splunk Cloud, you'll need connectivity to a directory replica, i.e. a domain controller, through a cloud-to-cloud private link or some other connection.

Note that Splunk does not index the Object DN field value correctly when renderXml = false (sourcetype=WinEventLog). There is a missing comma between the first and second RDNs. This:

DN: CN={CFD494B1-9D7F-448B-AF8F-3B7B3ABF1AA8}CN=POLICIES,CN=SYSTEM,DC=EXAMPLEDOMAIN,DC=LOCAL

should be this:

DN: CN={CFD494B1-9D7F-448B-AF8F-3B7B3ABF1AA8},CN=POLICIES,CN=SYSTEM,DC=EXAMPLEDOMAIN,DC=LOCAL

with a comma between CN={CFD494B1-9D7F-448B-AF8F-3B7B3ABF1AA8} and CN=POLICIES. We can fix the extracted DN field in our search:

| eval DN=replace(DN, "(?i)\\}CN=", "},CN=")

Assuming Splunk Supporting Add-on for Active Directory is configured and working, we can add the ldapfetch command to our search to fetch additional LDAP attributes using the DN field value:

| ldapfetch dn=DN attrs="displayName"

If a group policy object is deleted, it will no longer be in the directory, and ldapfetch will not return a displayName. To allow for those cases, you can schedule a search to periodically fetch group policy objects and store attributes in a lookup file:

| ldapsearch search="(objectClass=groupPolicyContainer)" attrs="whenChanged,distinguishedName,displayName" basedn="CN=Policies,CN=System,DC=EXAMPLEDOMAIN,DC=LOCAL" | eval _time=strptime(whenChanged, "%Y-%m-%d %H:%M:%S%z") | table _time distinguishedName displayName | sort 0 - _time | inputlookup append=true group_policy_object_lookup | dedup distinguishedName | outputlookup group_policy_object_lookup

Note that whenChanged is not a replicated attribute, and its value won't be precise. Its use here allows us to store the most recent displayName value available from the directory server in our lookup.
The lookup should be defined with case-sensitivity disabled (case_sensitive_match = false). With a lookup cache available, we can use the lookup command in place of ldapfetch:

| lookup group_policy_object_lookup distinguishedName as DN output displayName

While it's not relevant to Splunk, I like to note when I see it used that the .local TLD is reserved for use by multicast DNS. I normally use example.com for general-purpose documentation, contoso.com for Microsoft documentation, and occasionally buttercupgames.com for Splunk documentation. ICANN recently proposed defining a private-use TLD, although not specifically .internal as many have reported. (I can only assume the reporters didn't actually read the proposal.) I hope the proposal is adopted!
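The replace above can be sanity-checked outside Splunk; here is a quick Python equivalent of the same regex, using the broken DN from the post:

```python
import re

def fix_dn(dn):
    # Insert the missing comma between the first RDN's closing brace and
    # the following "CN=", case-insensitively - the same substitution as
    # the SPL: | eval DN=replace(DN, "(?i)\\}CN=", "},CN=")
    return re.sub(r"(?i)\}CN=", "},CN=", dn)

broken = "CN={CFD494B1-9D7F-448B-AF8F-3B7B3ABF1AA8}CN=POLICIES,CN=SYSTEM,DC=EXAMPLEDOMAIN,DC=LOCAL"
print(fix_dn(broken))
```

The substitution only touches the brace-then-CN= boundary, so DNs that already contain the comma pass through unchanged.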
Hi @LearningGuy  In the HTML code an open bracket is missing (highlighted in red), and the width value is set accordingly to keep the panel on the left: <html> <style> #DisplayPanel { width: 40% !important; font-size: 16px !important; text-align: left !important; float: left; } </style> </html>
Hi Isoutamo, thanks for the reply. I did not change anything except in the GUI: I changed the password and did a restart from Portainer for my Docker container, since I'm testing it out in a homelab environment. I have documented a video and a log; if you are interested, I can share them. Log: https://pastebin.com/6BHr0t93
Hi @Chirag812, Splunk manages retention on a per-bucket basis. This means that to freeze a bucket, the newest data in that bucket must be older than frozenTimePeriodInSecs. Normally all data in a bucket have close timestamps, but if some of your sources send data with old timestamps, that data will land in the same bucket as data with recent timestamps. This makes the bucket's oldest timestamp much older than the rest, which is why you see the situation above. Unfortunately, there is no way to fix this until the newest data in the bucket is older than frozenTimePeriodInSecs. To prevent this behavior in the future, check your data sources for the problems below:
- Always use healthy NTP servers for all your data sources to be sure they have correct timestamps.
- Check for timestamp extraction problems, and use the TIME_PREFIX and TIME_FORMAT settings to prevent Splunk from taking the wrong part of the log as a timestamp. If there are epoch-like patterns in your data, Splunk could use one of them as the timestamp.

You can use the query below to find the wrongly timestamped events to fix:

index=ABC earliest=1 latest=-63d
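As a rough illustration of the per-bucket rule (not Splunk code; the function name is made up), eligibility to freeze is judged by the bucket's newest event, so one recent event pins the whole bucket, stray old events included:

```python
import time

def bucket_can_freeze(newest_event_epoch, frozen_time_period_secs, now=None):
    # A bucket is only eligible to freeze when its *newest* event has
    # aged past frozenTimePeriodInSecs; a single recent event keeps the
    # whole bucket (including very old events) searchable.
    now = time.time() if now is None else now
    return (now - newest_event_epoch) > frozen_time_period_secs

now = 1_700_000_000
ninety_days = 90 * 86400
# Bucket whose newest event is 100 days old: eligible to freeze.
print(bucket_can_freeze(now - 100 * 86400, ninety_days, now))  # True
# Bucket holding a 2-year-old stray event AND an event from yesterday:
# judged by its newest event, so it stays on disk.
print(bucket_can_freeze(now - 86400, ninety_days, now))        # False
```

This is why fixing the timestamp extraction at the sources is the only real remedy: once mixed-age data shares a bucket, retention can't act on the old events separately.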
Here is. If I use the CN name value with Powershell's Get-GPO cmdlet then it returns me the Display Name of the GPO but I want to get it from Splunk results. 02/10/2024 11:30:27 AM LogName=Security EventCode=5136 EventType=0 ComputerName=DC-01.EXAMPLEDOMAIN.local SourceName=Microsoft Windows security auditing. Type=Information RecordNumber=26135 Keywords=Audit Success TaskCategory=Directory Service Changes OpCode=Info Message=A directory service object was modified. Subject: Security ID: EXAMPLEDOMAIN\administrator Account Name: Administrator Account Domain: EXAMPLEDOMAIN Logon ID: 0x92B8F Directory Service: Name: exampledomain.local Type: Active Directory Domain Services Object: DN: CN={CFD494B1-9D7F-448B-AF8F-3B7B3ABF1AA8}CN=POLICIES,CN=SYSTEM,DC=EXAMPLEDOMAIN,DC=LOCAL GUID: CN={CFD494B1-9D7F-448B-AF8F-3B7B3ABF1AA8}CN=Policies,CN=System,DC=exampledomain,DC=local Class: groupPolicyContainer Attribute: LDAP Display Name: versionNumber Syntax (OID): 2.5.5.9 Value: 196611 Operation: Type: Value Added Correlation ID: {8eaedf1e-827a-4ee8-8118-2b8e0ddb1133} Application Correlation ID: -   
Hi @yk010123, You can use the below:

| rex field=_raw "\[(?<connector>[^\|]+)" | stats count by connector
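The same extraction can be checked outside Splunk; here is a Python sketch of what the rex-then-stats pipeline does, using sample lines shaped like the ones in the question:

```python
import re
from collections import Counter

lines = [
    "[my-connector|worker] starting",
    "[other-connector|task-0] polling",
    "[my-connector|task-0|offsets] committing",
]

# Equivalent of: | rex field=_raw "\[(?<connector>[^\|]+)" | stats count by connector
# Capture everything after "[" up to the first "|", then count by that value.
counts = Counter(
    m.group("connector")
    for line in lines
    if (m := re.search(r"\[(?P<connector>[^|]+)", line))
)
print(counts)
```

Because the capture stops at the first pipe, the varying tails (worker, task-0, task-0|offsets) never affect the grouping.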
I have log entries that have the following format:

[<connectorName>|<scope>]<sp>

The following are examples of the connector context for a connector named "my-connector":

[my-connector|worker]
[other-connector|task-0]
[my-connector|task-0|offsets]

I would like to extract the names of the connectors and build stats. The tasks and other metadata are not needed. For example:

Connector Count
my-connector 2
other-connector 2

As the entries have different formats, how can I do this?
Dears,       After upgrading Splunk from version 9.1.2 to 9.2.0, the deployment server is not showing the clients, even though Splunk is receiving logs from the clients, and the client agents are showing under Settings --> Forwarder Management on all Splunk servers except the deployment server. I don't know how this occurred; I didn't change anything. Kindly provide your support for this.   Best Regards,
I believe this app or associated links to the app have been compromised. Consider removing it from Splunkbase. See the VirusTotal links below:

http[:]//emergingthreats[.]net
https://www.virustotal.com/gui/url/5232edc39848e69279fee041a84db6fb5bd0f9fff35f448392bbb56e242b0662
https://www.virustotal.com/graph/embed/gc54e4c8b7f474be6832766fdef4f5643aa60c68a16ee410fa54f99e4f6ca1b5b?theme=dark

<iframe src="https://www.virustotal.com/graph/embed/gc54e4c8b7f474be6832766fdef4f5643aa60c68a16ee410fa54f99e4f6ca1b5b?theme=dark" width="700" height="400"> </iframe>
It is not being used as a token - try atmnumber=$atm_token$
Hi @faiq1999, The ObjectDN data element should contain the distinguished name of the object. If that's not present in your data, can you provide a deidentified example of the event?