That looks ok, so it means your field called SERVERNAME is not exactly matching those strings. The in() eval function is an exact match. If you just do

index=some_index | table SERVERNAME

do you see exactly those strings? If it's an upper/lower case thing, you can do ... in(lower(SERVERNAME),"servername1"...
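A minimal sketch of that case-insensitive variant, using the placeholder field and server names from this thread (untested — note the string literals in in() must then be all lowercase):

```
index=some_index
| eval PODNAME=case(
    in(lower(SERVERNAME), "servername1", "servername2", "servername3"), "ONTARIO",
    in(lower(SERVERNAME), "servername4", "servername5", "servername6"), "GEORGIA",
    true(), "DANG")
| timechart span=10min count by PODNAME
```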
Thanks for your reply, but the statement is not returning only a specific event user name. Here are two events.

group.user_membership.add event:

{"actor": {"id": "spr1g8od2gOPLTfra4h7", "type": "SystemPrincipal", "alternateId": "system@okta.com", "displayName": "Okta System", "detailEntry": null}, "client": {"userAgent": null, "zone": null, "device": null, "id": null, "ipAddress": null, "geographicalContext": null}, "device": null, "authenticationContext": {"authenticationProvider": null, "credentialProvider": null, "credentialType": null, "issuer": null, "interface": null, "authenticationStep": 0, "externalSessionId": "trs-tF3wuwOTRiKM_BZirBk9A"}, "displayMessage": "Add user to group membership", "eventType": "group.user_membership.add", "outcome": {"result": "SUCCESS", "reason": null}, "published": "2024-02-20T15:40:04.384Z", "securityContext": {"asNumber": null, "asOrg": null, "isp": null, "domain": null, "isProxy": null}, "severity": "INFO", "debugContext": {"debugData": {"triggeredByGroupRuleId": "0pr7fprux4jw2hORP4h7"}}, "legacyEventType": "core.user_group_member.user_add", "transaction": {"type": "JOB", "id": "cpb7g4ndq8ZaAR5S14h7", "detail": {}}, "uuid": "5115faa0-d006-11ee-84e8-0b1ac5c0434f", "version": "0", "request": {"ipChain": []}, "target": [{"id": "00u7g4ndmhZ2j2J1i4h7", "type": "User", "alternateId": "USER-EMAIL", "displayName": "USER-NAME", "detailEntry": null}, {"id": "00g7fpoiohiAF2JrY4h7", "type": "UserGroup", "alternateId": "unknown", "displayName": "GROUP-NAME", "detailEntry": null}]}

user.authentication.sso event:

{"actor": {"id": "00u1p2k8w5CVuKgeq4h7", "type": "User", "alternateId": "USER-EMAIL", "displayName": "USER-NAME", "detailEntry": null}, "device": null, "authenticationContext": {"authenticationProvider": null, "credentialProvider": null, "credentialType": null, "issuer": null, "interface": null, "authenticationStep": 0}, "displayMessage": "User single sign on to app", "eventType": "user.authentication.sso", "outcome": {"result": "SUCCESS", "reason": null}, "published": "2024-02-20T22:25:18.552Z", "signOnMode": "OpenID Connect", "target": [{"id": "0oa2n26twxcr3lNWO4h7", "type": "AppInstance", "alternateId": "APPLICATION-NAME", "displayName": "OpenID Connect Client", "detailEntry": {"signOnModeType": "OPENID_CONNECT"}}, {"id": "0ua2n4im21IccI2Eh4h7", "type": "AppUser", "alternateId": "USER-EMAIL", "displayName": "USER-NAME", "detailEntry": null}]}

And my query:

index="IndexName" (eventType="group.user_membership.add" OR eventType="user.authentication.sso")
| rename "target{}.alternateId" AS "targetId"
| rename "target{}.type" AS "targetType"
| eval User=if(eventType="group.user_membership.add", mvindex(targetId, mvfind(targetType, "User")), "SSO User")
| spath "target{}.displayName"
| rename "target{}.displayName" as grpID
| eval groupName=mvindex(grpID, 1)
| table User groupName
| where eventType="user.authentication.sso"

What I'm looking to do is grab the user name and group name for the eventType="group.user_membership.add" events only. This event type tells me when a user is added to a particular group. Then I want to search for that user name in the eventType="user.authentication.sso" events and display the result as group name and user name. Basically, I want to get the list of users, by group name, who started using the authentication service. Thanks again for your time.
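One hedged way to express that correlation in a single search, assuming the JSON fields are auto-extracted under the names shown in the events above (an untested sketch, not a verified solution):

```
index="IndexName" (eventType="group.user_membership.add" OR eventType="user.authentication.sso")
| eval User=case(
    eventType="group.user_membership.add", mvindex('target{}.alternateId', mvfind('target{}.type', "User")),
    eventType="user.authentication.sso", 'actor.alternateId')
| eval groupName=if(eventType="group.user_membership.add",
    mvindex('target{}.displayName', mvfind('target{}.type', "UserGroup")), null())
| stats values(groupName) as groupName
        count(eval(eventType="user.authentication.sso")) as sso_logins
        by User
| where isnotnull(groupName) AND sso_logins > 0
```

The idea is that stats joins the two event types on User, so only users who have both a group-membership event and at least one SSO event survive the final where.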
Yep. +1 on that. HEC does skip some parts of the pipeline (line breaking, often timestamp recognition) but the index-time extractions and evals are applied normally.
Yes, that type of table can be done with chart, so:

... | chart count over PF by servername

What that won't do is distinguish which source each count came from, which may or may not be relevant to your use case. Do you care whether the count is combined between source 1 and source 2?
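If the per-source split does matter, one possible approach is to build a combined series name before charting (untested sketch, reusing the field names from this thread):

```
index=some_index
| eval series=servername . ":" . source
| chart count over PF by series
```

Each column then becomes a servername:source pair instead of one merged servername column.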
You can generally do this type of logic directly in the stats through eval statements, with an eval categorisation step prior to the stats for clarity. To expand on the 'type' logic I used in the earlier example, something like this:

| eval type=case(match(message,"created"), 1,
    match(message,"disconnected"), 2,
    match(message,"other_message"), 3)
| stats count(eval(if(type<3,_time,null()))) as connection_count
        count(eval(if(type=3,_time,null()))) as message_count
        values(type) as types
        min(eval(if(type<3,_time,null()))) as first_event_time
        range(eval(if(type<3,_time,null()))) as duration
        by userId, traceId
| addinfo
``` Handle created but no disconnect ```
| eval duration=if(duration=0 AND connection_count=1 AND types=1, info_max_time - first_event_time, duration)
``` Handle disconnect but no created ```
| eval duration=if(duration=0 AND connection_count=1 AND types=2, first_event_time - info_min_time, duration)
| stats values(message_count) as message_count sum(duration) as duration by userId

So type is 1-3 depending on the text you want to match; the count(eval(...)) statements inside the stats count the event types, and the time calculations exclude type=3. This is untested, but hopefully you get the picture. There are probably some optimisations there, but it should do what you need.
Hi, as transforms are handled in the typing processor (based on this picture: https://www.aplura.com/assets/pdf/hec_pipelines.pdf), it's doable. r. Ismo
On Linux you could use the command "ss -napt" to see which processes are running and on which port they are listening.
Hello, in Splunk Enterprise, I would like to know if it is possible to apply INGEST_EVAL processing at the indexer layer for data that is coming to the indexer from HEC (HTTP Event Collector). Thanks
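For reference, a minimal sketch of what such an INGEST_EVAL setup might look like on the indexers; the sourcetype, transform, and field names here are placeholders, not from this thread:

```
# props.conf
[my_hec_sourcetype]
TRANSFORMS-hec_ingest = add_index_time_field

# transforms.conf
[add_index_time_field]
INGEST_EVAL = normalized_host=lower(host)
```

These stanzas belong on the instance that parses the data; for HEC sent straight to the indexers, that is the indexers themselves.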
Thanks, but I haven't quite got it. The query is accepted, but PODNAME is not being set (everything is under DANG).

index=some_index
| eval PODNAME=case(
    in(SERVERNAME, "servername1", "servername2", "servername3"), "ONTARIO",
    in(SERVERNAME, "servername4", "servername5", "servername6"), "GEORGIA",
    1==1, "DANG")
| timechart span=10min count by PODNAME
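To confirm whether stray whitespace or casing is the culprit, a small diagnostic search can help; the brackets make leading/trailing spaces visible (a sketch using this thread's field names, untested):

```
index=some_index
| eval SN_raw="[" . SERVERNAME . "]", SN_norm=upper(trim(SERVERNAME))
| stats count by SN_raw, SN_norm
```

If SN_raw shows values like "[servername1 ]" or mixed case, the exact-match in() will never fire on the raw field.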
Thanks a lot @bowesmana, this helped me. There are also some message-sent events in between these connect and disconnect events (I need to calculate how many messages were sent for a userId). Is there any way to exclude these message-sent events when calculating min(_time), in the case where no connect event was available in the logs?
1. Neat idea. You could adjust that to "scale" from x.00 to x.59. 2. With a locale that uses a comma for the decimal point it will look worse (but I don't remember if Splunk can do commas).
To be precise, I didn't suggest using evals. This eval is already defined within the TA and it's the reason why the field is empty.
The question is whether you really want LB or whether you're mistaking it for HA. And receiving syslog directly on a Splunk box, regardless of whether it's a UF or HF, is not a great idea.
Did you ever find an answer to this? I have the same question.
We are working to link server information to the services in the ServiceNow CMDB. We are looking, for example, at the relationships between CIs.
I'm very disheartened to hear about this. I run the Idaho Falls Splunk Users Group and will present next month on using Windows Event Logs to find intruders. You are welcome to join our group and attend. I will give many examples you can cut and paste into your Splunk instance. You may need to modify them slightly for your environment, but they will give you an idea of how to build additional use cases.

Personally, I never re-invent the wheel. There is a lot of detection content out there that I look for before building my own. I would start by googling "Splunk", "Threat detection", and then '"Splunk threat detection" github'. There are many people who, unlike the insecure people you work with, willingly share their talents. We all got where we are with the help of others. It's sad to hear you are being treated this way.

Once you join the user groups, you can contact me directly. I will take time to work with you as time permits. I will also point you to resources that will help you grow in the field and use Splunk for building use cases. I'll include a couple of resources here for you.

Another thing to keep in mind is that all threat hunting that finds positive activity should lead to a signature. There are tons of threat-hunting Splunk searches out there, and they can also be used as use cases. You may need to tune them (cast a narrower net) before putting them into production, but they will give you a good idea of how to build out detection.

You will find that industry leaders are always sharing their research and knowledge. Jack Crook was one of my mentors when I was new in this career. Jack and I share the same frustration of attending conferences where many share "theories" but not content. He is big on sharing content and actual Splunk searches and use cases. You can follow his blog; I have included it below.

As you grow in this career, remember not to be like the others who have treated you so poorly. Remember, this is a very negative reflection on them, not you! I hope that this helps, and I hope to talk soon!

http://findingbad.blogspot.com
https://www.udemy.com/course/cybersecurity-monitoring-detection-lab/?couponCode=ST9MT22024
https://github.com/splunk/security_content/tree/develop/detections/application
https://www.detectionengineering.net
https://github.com/west-wind/Threat-Hunting-With-Splunk
Hello all, while developing my Splunk Add-on, I've run into a blocker concerning the time picker on search. Currently, changing the time from the default "Last 24 hours" to any other value has no effect, and the search always returns all of the elements from my KV store. I've read through about a dozen forum threads but haven't found a clear answer to this problem. Any help with which settings/files need to be configured would be appreciated! I am developing on Splunk Enterprise 9.1.3.
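For context: inputlookup-style KV store searches don't honor the time range picker by themselves, so the usual workaround is to filter explicitly. A hedged sketch, assuming the collection stores an epoch timestamp in a field called time_field (the lookup and field names are placeholders):

```
| inputlookup my_kvstore_lookup
| addinfo
| where time_field >= info_min_time AND time_field <= info_max_time
| fields - info_min_time info_max_time info_sid info_search_time
```

addinfo adds the selected time range as info_min_time/info_max_time, which the where clause can then compare against the stored timestamp.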
Ok, thanks for letting me know. 
I figured it out. I added an if statement in my search, x=if(avg <= threshold, avg, 0), and then went to the source code and assigned x a color.
I created a table that outputs the organization, threshold, count, and response time. If the response time is greater than the threshold, then I want the response time value to turn red. However, the threshold is different for every organization. Is there a way to dynamically set the threshold on the table, so that the Response Time column turns red based on its respective threshold?

For example, organization A will have a threshold of 3, while organization B will have a threshold of 10. I want the table to display all the organizations, count, the response time, and the threshold.

index | stats count as "volume" avg as "avg_value" by organization, _time, threshold

Kindly help.
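One common pattern (a sketch, not tested against this data; response_time is an assumed field name) is to compute a per-row flag in the search, then key the table's color formatting off that flag:

```
index=your_index
| stats count as volume avg(response_time) as avg_value by organization, threshold
| eval range=if(avg_value > threshold, "severe", "low")
```

In a dashboard, a color-by-value format (or custom table rendering) on the avg_value column can then map "severe" to red using the range field, giving each organization its own effective threshold.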