All Posts



Hi @AL3Z , do you want to remove all the events from the input, or only the selected parts of the events? If you want to remove all the events, you could use a simple regex to blacklist them:

Data Name\=\'ParentProcessName\'\>C:\\Program Files\\(Windows Defender Advanced Threat Protection\\MsSense\.exe)|(Windows Defender Advanced Threat Protection\\SenseIR\.exe)|(AzureConnectedMachineAgent\\GCArcService\\GC\\gc_worker\.exe)|(Rapid7\\Insight Agent\\components\\insight_agent\\3\.2\.5\.31\\ir_agent\.exe)

You can check this regex at https://regex101.com/r/9lsjyz/1. Ciao. Giuseppe
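If the goal is to drop these whole events at index time rather than blacklist them at the input, a minimal props/transforms sketch could look like the following. This is only an illustration: the sourcetype name `XmlWinEventLog` and the simplified REGEX are assumptions to be adapted to your environment.

```ini
# props.conf -- sourcetype name is an assumption; use your actual sourcetype
[XmlWinEventLog]
TRANSFORMS-drop_noisy_parents = drop_noisy_parents

# transforms.conf -- routes matching events to nullQueue so they are never indexed
[drop_noisy_parents]
REGEX = ParentProcessName[^<]*(MsSense\.exe|SenseIR\.exe|gc_worker\.exe|ir_agent\.exe)
DEST_KEY = queue
FORMAT = nullQueue
```

Note that nullQueue filtering runs on the indexer (or heavy forwarder), whereas a blacklist in inputs.conf stops the events at the universal forwarder; which one fits depends on where you want the filtering to happen.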
Hello all,

We have a Splunk alert that searches for high temperature events on Juniper routers; it's a very straightforward search:

index=main CHASSISD_FRU_HIGH_TEMP_CONDITION OR CHASSISD_OVER_TEMP_SHUTDOWN_TIME OR CHASSISD_OVER_TEMP_CONDITION OR CHASSISD_TEMP_HOT_NOTICE OR CHASSISD_FPC_OPTICS_HOT_NOTICE OR CHASSISD_HIGH_TEMP_CONDITION OR (CHASSISD "Temperature back to normal") NOT UI_CMDLINE_READ_LINE

I'd like this Splunk alert to ignore temperature alarm events on the host router4-utah when FPC 11 (FPC: MPC5E 3D 24XGE+6XLGE @ 11/*/*) is running hot. The events always come in the following order, within 25 seconds of each other.

The alarm trigger events:

Sep 27 05:26:00 re0.router4-utah chassisd[7726]: CHASSISD_BLOWERS_SPEED_FULL: Fans and impellers being set to full speed [system warm]
Sep 27 05:26:00 re0.router4-utah alarmd[7895]: Alarm set: Temp sensor color=YELLOW, class=CHASSIS, reason=Temperature Warm
Sep 27 05:26:00 re0.router4-utah craftd[7730]: Minor alarm set, Temperature Warm
Sep 27 05:26:00 re0.router4-utah chassisd[7726]: CHASSISD_HIGH_TEMP_CONDITION: Chassis temperature over 60 degrees C (but no fan/impeller failure detected)
Sep 27 05:26:02 re0.router4-utah chassisd[7726]: CHASSISD_SNMP_TRAP6: SNMP trap generated: Over Temperature! (jnxContentsContainerIndex 7, jnxContentsL1Index 12, jnxContentsL2Index 0, jnxContentsL3Index 0, jnxContentsDescr FPC: MPC5E 3D 24XGE+6XLGE @ 11/*/*, jnxOperatingTemp 91)

The alarm clear events:

Sep 27 05:26:21 re0.router4-utah alarmd[7895]: Alarm cleared: Temp sensor color=YELLOW, class=CHASSIS, reason=Temperature Warm
Sep 27 05:26:21 re0.router4-utah craftd[7730]: Minor alarm cleared, Temperature Warm

The goal is to keep the normal temperature alert running as it always has, but somehow ignore the host router4-utah when it triggers and clears temperature alarms on FPC 11.
I think the easiest way to say this is: ignore any temp alarm that triggers and clears on router4-utah within 25 seconds of this line:

Sep 27 05:26:02 re0.router4-utah chassisd[7726]: CHASSISD_SNMP_TRAP6: SNMP trap generated: Over Temperature! (jnxContentsContainerIndex 7, jnxContentsL1Index 12, jnxContentsL2Index 0, jnxContentsL3Index 0, jnxContentsDescr FPC: MPC5E 3D 24XGE+6XLGE @ 11/*/*, jnxOperatingTemp 91)

Any assistance one can provide is much appreciated! Thanks.
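A sketch of one possible approach, untested against this data: since the FPC 11 trap line is the distinguishing event, a simple first cut is to exclude that host's FPC 11 alarm sequence outright before the time-window logic is even needed. The host value `re0.router4-utah` and the assumption that the FPC description appears in the raw event are taken from the sample events above.

```spl
index=main CHASSISD_FRU_HIGH_TEMP_CONDITION OR CHASSISD_OVER_TEMP_SHUTDOWN_TIME
    OR CHASSISD_OVER_TEMP_CONDITION OR CHASSISD_TEMP_HOT_NOTICE
    OR CHASSISD_FPC_OPTICS_HOT_NOTICE OR CHASSISD_HIGH_TEMP_CONDITION
    OR (CHASSISD "Temperature back to normal") NOT UI_CMDLINE_READ_LINE
``` filter out the known-hot FPC 11 alarms on this one router ```
    NOT (host="re0.router4-utah" "MPC5E 3D 24XGE+6XLGE")
```

If you genuinely need the 25-second correlation (i.e. only suppress trigger/clear pairs that bracket the CHASSISD_SNMP_TRAP6 line), a `transaction` or `streamstats` grouping by host with `maxspan=25s` would be the next step, but the simple host/FPC exclusion above is worth trying first.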
For my particular use case, I want to compare the count of fields extracted from Windows event logs before versus after I install the Splunk TA for Windows on my search head. It fits my use case, but it might not fit others, for example if you have inconsistent configurations across your search head peers.
It tells you the maximum value of the year portion of the event date over time. As to whether that provides useful information, that's another story...
That's unlikely to give you a good representation of fields that actually are part of the data associated with a sourcetype. For example, if you just run this on index=_audit:

index=_audit
| stats last(*) AS * by sourcetype
| foreach * [ eval <<FIELD>>=if("<<FIELD>>" == "sourcetype", sourcetype, 1), fields=mvappend(fields, "<<FIELD>>") ]
| addtotals
| fields Total sourcetype fields

you will get a load of fields associated with audittrail - on 3 different search heads, I get from 149 to 323 fields, most of which are just pulled in due to TAs installed on the search heads.
I'm ten years late, so I hope it's not urgent. But here's a solution you can run inline:

#base search here
| stats last(*) AS * by sourcetype
| foreach * [ eval <<FIELD>>=if("<<FIELD>>" == "sourcetype", sourcetype, 1)]
| addtotals
| fields Total sourcetype
Your table shows user2, who authenticated less than 30 days after creation, so do you want this in the output? What does "Authentications since created (After 31 days)" mean in your table, as user2 has a positive value but the last date is within 30 days?

If you're looking to find users who were created 31 days ago but have not logged in since, then you would use this type of search, where you need to work out what is a login event and what is a created event, so you can determine the logic for event_is_login in the example below.

index=duo earliest=-31d@d latest=@d INCLUDE_CREATED_EVENTS_AND_LOGIN_EVENTS
| eval created=if(actionlabel="added user" AND _time < relative_time(now(), "-3d@d"), _time, 0)
| where created>0 OR event_is_login
| stats count(eval(if(event_is_login, 1, null()))) as Logins max(eval(if(event_is_login, _time, null()))) as LastLogin max(created) as created_time by object
| rename object AS User
| eval LastLogin=strftime(LastLogin, "%m/%d/%Y")

What state in your data indicates that the user was created - is it actionlabel="added user"?
What is your SSL server cert configured as? Can you read the output of that cert using openssl (if using the default cert)?

openssl x509 -in /opt/splunk/etc/auth/server.pem -text -noout
index=botsv1 sourcetype="stream:http" | timechart max(date_year)
What SSO platform are you using? And is your licensing set properly for all hosts?
Can you post the query and show what the results were?
If you have values that contain spaces or other punctuation characters, then you should ensure the tokens quote the values, which is normally done with | stats values($value2|s$)..., i.e. adding |s before the final $ sign, which tells Splunk to quote the token value.

However, in the above, your values() statements are expecting FIELD NAMES containing values, whereas the value of the $value2$ token will be the clicked duration, so values($value2$) is meaningless. What are you actually trying to do with that stats statement - is it just to debug things?

Sometimes when debugging token things, it's useful to just add an HTML panel inside the dashboard where you can render the tokens, e.g.

<row>
  <panel>
    <html>
      <h2>name1=$name1|s$</h2>
      <h2>value1=$value1|s$</h2>
    </html>
  </panel>
</row>
Hi everyone, after I select the source type I am getting the below error while using ingest actions. I had to update the pass4SymmKey, as ingest actions require a custom pass4SymmKey to be set up.

Connection testing failed in all remote clients: [https://*.*.*.*:8089]. This can be caused by misconfiguration of secret key or event capture is not supported in those remote splunk instances.

Any idea what is happening?
I have 3 queries; I want to combine them into one query so that I can use it for an alert.

Query1:
index=error-data sourcetype=error:logs source=https://error:appliocation.logs "logs started" "tarnsaction recevied" [| inputlookup append=t errorlogs.csv where error=2 | fields host | format]
| stats count as "initial error logs"

Query2:
index=error-data sourcetype=error:logs source=https://error:appliocation.logs " timeouterror" AND "failed logs confirmed " [| inputlookup append=t errorlogs.csv where error=2 | fields host | format]
| stats count as "logs in transactions"

Query3:
index=error-data sourcetype=error:logs source=https://error:appliocation.logs " application logs continuted" [| inputlookup append=t errorlogs.csv where error=2 | fields host | format]
| stats count as "total failed"
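Since all three queries share the same index, sourcetype, source, and lookup filter, one possible combined form (a sketch, assuming each event matches at most one of the three phrase sets; the search-string literals are kept exactly as posted) is to classify each event with eval/case and count by category:

```spl
index=error-data sourcetype=error:logs source=https://error:appliocation.logs
    (("logs started" "tarnsaction recevied") OR ("timeouterror" "failed logs confirmed") OR ("application logs continuted"))
    [| inputlookup append=t errorlogs.csv where error=2 | fields host | format]
``` classify each event into the bucket its phrases indicate ```
| eval bucket=case(
    match(_raw, "logs started") AND match(_raw, "tarnsaction recevied"), "initial error logs",
    match(_raw, "timeouterror") AND match(_raw, "failed logs confirmed"), "logs in transactions",
    match(_raw, "application logs continuted"), "total failed")
| stats count by bucket
```

This yields one row per category, which is usually easier to alert on than three separate single-value counts; the alert condition can then test any row's count.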
Hi @bowesmana , I tried the suggested query but it's not working.
Hi Berfomet96, can you try the line breaker regex below?

LINE_BREAKER = ([\r\n]+)\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}

Also, your TIME_PREFIX and TIME_FORMAT do not seem to match, as eventtime is an epoch timestamp.
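Putting the two suggestions together, a props.conf stanza might look like the sketch below. The stanza name and the `eventtime=` prefix are assumptions (the original TIME_PREFIX was not shown); for an epoch timestamp, `%s` is the matching TIME_FORMAT.

```ini
# props.conf -- stanza name and TIME_PREFIX are assumptions; adapt to your data
[your:sourcetype]
# break before syslog-style timestamps like "Sep 27 05:26:00"
LINE_BREAKER = ([\r\n]+)\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}
SHOULD_LINEMERGE = false
# eventtime is epoch seconds, so %s is the format
TIME_PREFIX = eventtime=
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11
```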
I tried converting the _time field as suggested, with the help of one of the solutions you provided earlier (Solved: Re: convert date to epoch - Splunk Community). But no luck. Can you please help with the query?

Did you consult Date and time format variables when you tried that solution? The solution is provided for that particular format. In your case, it would be something like:

strptime(_time, "%FT%H:%M:%S.%Q%:z")

The _time field looks something like "2023-09-06T18:30:00.000+00:00" in the lookup CSV, whereas in the results generated by the query it looks like "2023-09-06 18:30:00".

If you have control over this lookup file, rename the _time field to something else, like "time". Splunk does some funny things when it sees _ as the first character of a field name; this causes more confusion than it is worth. In your case, Splunk is trying to interpret the field as an internal field and gives its best shot at presentation, but internally it is still represented as a string. This causes your chart command to not have a time axis. It is best to reserve _fieldname for Splunk's internal use.
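A sketch of the rename-plus-convert approach (the lookup name my_lookup.csv is a placeholder, and the final timechart is only illustrative):

```spl
| inputlookup my_lookup.csv
``` avoid the _time-as-internal-field confusion by working on a renamed copy ```
| rename _time AS time
``` parse "2023-09-06T18:30:00.000+00:00" into a real epoch value ```
| eval _time=strptime(time, "%FT%H:%M:%S.%Q%:z")
| timechart span=1d count
```

Once _time holds a numeric epoch produced by strptime, the chart/timechart commands get a proper time axis again.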
@bowesmana , I've created the tokens via the Drilldown Editor. However, when I try using the tokens in a panel, i.e.

| stats values($value2$), values($trellis_split$), values($trellis_value$), values($row_axis_name$), values($row_fieldname$), values($d_trellis_split$), values($d_trellis_value$), values($d_trellis_name$), values($d_value1$)

I'm only seeing name1 & trellis_name. Everything else is blank.
Hi, I want to blacklist them on the inputs, as I am left with only three blacklist slots.
I would like help with creating the following: search when an account was created, and return a list of users who have not authenticated 30 days after the account was created. I have a search to show details for a particular user, but I would like to create a list of all users and set an alert if a user has not authenticated after 30 days.

index=duo object=<user1> OR username=<user1>
| eval _time=strftime(_time,"%a, %m/%d/%Y %H:%M")
| table _time, object, factor, action, actionlabel, new_enrollment, username
| rename object AS "Modified User", username AS "Actioned By"
| sort _time desc

So if actionlabel="added user" exists, I would like to return new_enrollment=false:

Object(actionlabel=added user) = username(new_enrollment=false)

Here's the output I'm searching for:

User  | Created   | Authentications since created (After 31 days) | Last Authentication
user1 | 7/25/2023 | 0                                             |
user2 | 7/27/2023 | 3                                             | 8/19/2023