All Posts

Interesting Fields is just a GUI feature that shows fields present in at least 20 percent of events. Just because a field is not listed there doesn't mean it's not being parsed out of the event. Actually, with renderXml=true you get XML-formatted events from which all fields should be automatically parsed.
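In case it helps anyone, here is a minimal props.conf sketch for forcing automatic search-time XML extraction; it assumes renderXml=true on the input and that the sourcetype was renamed to wineventlog as in the original post:

    [wineventlog]
    # Search-time setting (lives on the search head):
    # extract fields from XML-rendered events automatically
    KV_MODE = xml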
Hi @Mojal @marnall, I am facing the same issue with my Splunk cluster. Were y'all able to find any workarounds/solutions? P.S. I have deployed the Splunk cluster via splunk-operator in my Kubernetes environment.
Do you mean chained searches?
Amazing, worked like a charm.   Thanks!
This got me close enough to what I needed.  In my effort to streamline and reduce clutter I oversimplified the issue in my original post.  In any case though, thank you for the help!
Hello! I am trying to collect 3 additional Windows Event logs and I have added them in inputs.conf, for example:

    [WinEventLog://Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin]
    disabled = 0
    start_from = oldest
    current_only = 0
    checkpointInterval = 5
    renderXml = true

Admin, Autopilot, and Operational were added the same way. I also added in props.conf:

    [WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin]
    rename = wineventlog

    [WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Autopilot]
    rename = wineventlog

    [WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Operational]
    rename = wineventlog

The data are coming in; however, none of the fields are parsed as interesting fields. Is there something I am missing? I looked through some of the other conf files, but I think I am in over my head making a new section in props. I thought the base [WinEventLog] stanza would take care of the basic breaking out of interesting fields like EventID, so I am a bit lost.
How can I implement a post-process search using the Dashboard Studio framework? I can see that there is excellent documentation for doing this in Simple XML (Searches power dashboards and forms - Splunk Documentation), but I can't seem to find relevant information on how to do this in the JSON source for Dashboard Studio. Note: I am not attempting to use a savedSearch.
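For future readers: Dashboard Studio's equivalent of a post-process search is a chain search. A minimal sketch of the dataSources section of the dashboard definition JSON (the data source names base and chained, and the queries, are placeholders):

    {
      "dataSources": {
        "base": {
          "type": "ds.search",
          "options": {
            "query": "index=_internal | stats count by sourcetype, log_level"
          }
        },
        "chained": {
          "type": "ds.chain",
          "options": {
            "extend": "base",
            "query": "| where log_level=\"ERROR\""
          }
        }
      }
    }

Each visualization then points its primary data source at either base or chained.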
Is this still the case? I have an EC2 instance that has dynamic IPs and I would like to set up a Splunk forwarder. Am I still able to get the logs over to the correct data lake?
Thank you all for your help. I found the problem with my inputs.conf; it was right in front of me, but I just didn't see it. In my inputs.conf, for some reason, I had a stanza setting "host = <indexer-name>". So all logs were getting to the indexer, but under my indexer's name, except /var/log/messages and cron; hence I wasn't "seeing" them. I need to check why those files (messages and cron) were coming in under my real UF name; maybe because they probably have the host name in the logs. The good part is I learnt a few new troubleshooting tips, thanks to you all; appreciate your help.
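For anyone else who trips over this, a sketch of the culprit versus one typical fix in the UF's inputs.conf (the fix assumes you want each forwarder to report its own hostname):

    # Culprit: stamps every event with the indexer's name
    [default]
    host = <indexer-name>

    # Typical fix: let the forwarder resolve its own hostname at startup
    [default]
    host = $decideOnStartup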
Thanks for the clarification! I am running this by the team now...
I'm not very good with SPL. I currently have Linux application logs that show the IP address, user name, and whether the user failed or had a successful login. I'm interested in finding a successful login after one or more failed login attempts. I currently have the following search. The transaction command is necessary where it is; otherwise, all the events are split up into separate events of varying line counts.

    index=honeypot sourcetype=honeypotLogs | transaction sessionID | search "SSH2_MSG_USERAUTH_FAILURE" OR "SSH2_MSG_USERAUTH_SUCCESS"

Below is an example event. For clarity, I replaced/omitted details from the logs below.

    [02] Tue 27Aug24 15:20:57 - (143323) Connected to 1.2.3.4
    ...
    ...
    [30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
    [31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_FAILURE
    ...
    [30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
    [02] Tue 27Aug24 15:20:57 - (143323) User "bob" logged in
    [31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_SUCCESS: successful login

Any tips on getting my search to find events like this? Currently I only have field extractions for the IP (1.2.3.4), user (bob), and sessionID (143323). I can possibly create a field extraction for the SSH2 messages, but I don't know if that will help or not. Thanks!
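One way to keep only the sessions that saw both a failure and a success is a sketch like this, building on your existing transaction search (searchmatch() simply tests whether the assembled transaction's raw text contains the string):

    index=honeypot sourcetype=honeypotLogs
    | transaction sessionID
    | where searchmatch("SSH2_MSG_USERAUTH_FAILURE") AND searchmatch("SSH2_MSG_USERAUTH_SUCCESS")

Note this does not verify that the failure came before the success; since transaction preserves event order, a stricter check could extract the SSH2 message names into a multivalue field and compare their positions.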
In the Azure Splunk Enterprise application, under Users and Groups, add the Azure security groups whose members need access. Then go into the Single sign-on section and review the Attributes and Claims. You should have an entry with "groups" as the claim name. Its value should be set to "Groups assigned to the application", with Source Attribute set to "Group ID". This sends only the groups assigned to the application that the signed-in user is a part of, instead of every single group, which might go over the limit. On the Splunk side, you enter your AD group name (or, with some versions, it has to be the object ID of the Azure group) and map it to the correct internal Splunk role you have created for that team.
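For reference, the Splunk-side mapping described above lives in authentication.conf; a minimal sketch (the role name and group object ID here are placeholders):

    [roleMap_SAML]
    # <splunk-role> = <Azure group name or object ID sent in the "groups" claim>
    splunk_app_team = 1a2b3c4d-0000-0000-0000-000000000000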
1) What is the difference between using "| search ip=" and "ip="? They give the same outcome.

The idea that adding a filter in the index search improves performance is a general recommendation based on the assumption that the bottleneck is the number of raw events. This may not be the case. There could be another case where the index search filter works better: when the field-based filter applies to index-time extracted fields. I did observe that in some of my searches an index-time filter slows the search down rather than speeds it up. I have yet to conduct systematic research, but if it doesn't speed things up for you, use what works better.

2) Sorry about not mentioning dedup. Because dedup will remove any rows that have empty/null fields, I put the dedup after the join and added the "fillnull" command. If I moved it to each subsearch, I would need to add a fillnull command for each subsearch, and that's probably adding a delay. What do you think?

dedup has an option keepempty that you can try:

    | dedup keepempty=true ip, risk, score, contact

In some of my use cases, keeping all events that have any empty field is a bit too much. In that case, you can do fillnull before dedup, provided that you don't care to print those rows with empty risk, score, or contact. Something like:

    | inputlookup host.csv
    | rename ip_address as ip
    | join max=0 type=left ip
        [ search index=risk company IN (compA, compB)
          | fields ip risk score contact
          | fillnull value=UNSPEC risk score contact
          | dedup ip risk score contact ]
    | foreach risk score contact
        [eval <<FIELD>> = if(<<FIELD>> == "UNSPEC", null(), <<FIELD>>)]
    | table ip host risk score contact

You can also apply the same technique with split subsearches. Again, I do not know your data characteristics, so whether dedup does any good is for you to find out.
If I can't figure it out, I'll try the simple dashboard.
I tried this but it would not work:

    eventtype=builder (user_id IN ($id$) OR user_mail in $email$) | eval .....

I also tried:

    eventtype=builder ((user_id IN ($id$) OR (user_mail IN ($email$))) | eval ...

but that only works if both tokens are populated.
Thanks for the advice! We're experiencing the same issues on the same RHEL (8.10). We will also check on our test env whether this helps. Also interested in updates if someone finds out something.

Regards,
Tobias
Thanks! This was exactly the fix I found. 
Modified Query
============

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1"
    where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxx" earliest=-31d@d latest=@d-1m
    by entity.application.name
| eval hour = tonumber(strftime(_time,"%H"))
| eval dow = tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response1
Original Query
============

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1"
    where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxxx" earliest=-31d@d latest=@d-1m
    by entity.application.name
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response
Never mind. I posted too soon. I replaced "| addcoltotals label=Total " with "| addcoltotals labelfield="Vision ID" label="Total"" and it worked. Thanks.