All Posts

Thanks @PickleRick, I didn't see that, and it wasn't specifically extracted in the search.
Wait a moment. What exactly are you trying to do? Because it sounds as if you were trying to use WebUI to configure... inputs(?) in an app which you installed using WebUI somewhere over a SH cluster. Or are you doing something else?
@ITWhisperer Look closer. There is an s=identifier pair in the event.
It all depends on how your fields are delimited/anchored. @marnall 's answer is obvious if you have just two or three words separated by spaces. If your "layout" is different, you have to adjust it.
Two things. 1) If these values are specific to particular sources, I'd add them at the source as _meta entries to an input stanza on the initial forwarder. 2) These will be indexed fields and need to be added to fields.conf. You have to remember to set INDEXED_VALUE=false for them. Otherwise Splunk will not be able to find them unless you explicitly use the field::value syntax.
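A minimal fields.conf sketch of point 2 above, assuming the two field names from the original question (ServerName and ServerIP) - adjust the stanza names to your actual fields:

[ServerName]
INDEXED_VALUE = false

[ServerIP]
INDEXED_VALUE = false

With INDEXED_VALUE=false, Splunk knows the field's value is not a literal token in _raw, so a plain ServerName=mobiwick search can still find the events.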
@meetmshah Thanks for your suggestion, I will definitely try it. Meanwhile, before your suggested workaround, I tried it myself with the INGEST_EVAL attribute in transforms.conf, together with props.conf and fields.conf, and it is working.
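For reference, an INGEST_EVAL configuration along these lines would look roughly like the sketch below. This is not the poster's exact config - the stanza name is made up, and the values are taken from the original question:

transforms.conf:
[add_static_fields]
INGEST_EVAL = ServerName:="mobiwick", ServerIP:="10.30.xx.56.78"

props.conf:
[<sourcetype>]
TRANSFORMS-add_static = add_static_fields

Like the WRITE_META approach, this creates index-time fields, so the matching fields.conf entries are still needed.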
Thanks - what is s in your search's by clause, as it doesn't appear to be in your data?
Hello @uagraw01, I believe the below should work.

props.conf:
[<sourcetype>]
TRANSFORMS-add_fields = add_additional_field

transforms.conf:
[add_additional_field]
REGEX = .*
FORMAT = ServerName::mobiwick ServerIP::10.30.xx.56.78
WRITE_META = true

The above will add 2 additional fields to the events. Note that it will not update the _raw events. Please accept the solution and hit Karma, if this helps!
Along with what @richgalloway suggested (Splunk Security Essentials / SSE), I would also go for Splunk ES Content Update (ESCU, https://splunkbase.splunk.com/app/3449). The analytic stories and their searches are also available at https://github.com/splunk/security_content. Please hit Karma, if this helps!
Hello @Raphy AFAIK, there's no default method which mandates having an owner assigned while closing the notable event. That being said, you can do either of the following:
1. Have a default owner assigned - https://community.splunk.com/t5/Splunk-Enterprise-Security/Is-it-possible-to-auto-assign-notables-in-Enterprise-Security
2. Schedule a search which periodically gives you a list of notables where no owner is assigned:
| inputlookup incident_review_lookup
| where status="Closed" AND isnull(owner)
Please accept the solution and hit Karma, if this helps!
More often than not, it would be because the default macro was not updated - the macro which records in which index the data resides. As @Bhumi suggested, can you share the name of the TA so we can assist you further?
Hello @zksvc Was the notable created after you updated the next actions - or was it already generated and later you updated the Correlation Search?
Hello @grep, Can you please try removing the whitelisting from the "CIM Setup" page and only have the condition available from the Macro page? Let me know if it doesn't work and I can troubleshoot.
Hi @darkins , could you share some samples of your logs, highlighting the strings to extract? Ciao. Giuseppe
Hello Splunkers!! I have a raw event, but the fields server IP and server name are not present in this raw event, and I need to extract both these fields in Splunk at index time. Both fields have static values. What attributes should I use in props and transforms so that I can get both these fields?

Servername="mobiwick" ServerIP="10.30.xx.56.78"

Sample raw data: <?xml version="1.0" encoding="utf-8"?><StaLogMessage original_root="ToLogMessage"><MessageId>6cad0986-d4b2-45e2-b5b1-e6a1af3c6d40</MessageId><MessageTimeStamp>2024-11-24T07:00:00.1115119Z</MessageTimeStamp><SenderFmInstanceName>TOP/Top</SenderFmInstanceName><ReceiverFmInstanceName>BPI/Bpi</ReceiverFmInstanceName><StatisticalElement><StatisticalSubject><MainSubjectId>NICKER</MainSubjectId><SubjectId>Prodtion</SubjectId><SubjectType>PLAN</SubjectType></StatisticalSubject><StatisticalItem><StatisticalId>8</StatisticalId><Period><TimePeriodEnd>2024-11-24T07:00:00Z</TimePeriodEnd><TimePeriodStart>2024-11-24T06:00:00Z</TimePeriodStart></Period><Value>0</Value></StatisticalItem></StatisticalElement></SogMessage>
Hi, I posted sample log entries. I am not sure how readable this is.
Nov 24 15:01:43 pphost.company.com 2024-11-24T04:01:43.100466+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv mod=session cmd=disconnect module= rule= action= helo=sendinghost msgs=1 rcpts=2 routes=allow_relay,default_inbound,internalnet duration=0.128 elapsed=100
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.614350+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mail cmd=msg module= rule= action=continue attachments=0 rcpts=2 routes=allow_relay,default_inbound,internalnet size=5441 guid=jAIwVNBFVxC8EycWPq7c1MicIX5v1om5 hdr_mid=<42y9nt2euv-1@pphost.company.com> qid=4AO403EC022673 hops-ip=10.20.30.40 subject="MESSAGE SUBJECT" duration=0.125 elapsed=0.127
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.614025+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 cmd=send profile=mail qid=4AO403EC022673 rcpts=RECIPIENT1@company.com,RECIPIENT2@company.com
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.505939+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=judge module=none rule=none
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.505617+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=dmarc cmd=run arc_result=none arc_result_detail=none arc_trusted_flag=0 arc_override=0 dmarc_detail="nothing to see here" dmarc_record=none dmarcverified= final_dmarc_result=none orig_dmarc_result=none auth_result=none original_auth_result= dyndmarc_override_id= dmarcoverride_type=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.505346+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=dkimv cmd=run rule=none dkimresult=none spfheaderfromresult=none duration=0.000
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.505036+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=spf cmd=run cmd=eob result=none
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.504665+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mail cmd=attachment id=0 file=text.html mime=text/html type=html omime=text/html oext=html corrupted=0 protected=0 size=3550 virtual=0 sha256=11dbefae8a521d127ef990b45e998cae68184d56a3d657ee6661f11a8b048d85 a=0 duration=0.000
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.499983+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mimelint cmd=getlint warn=0
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.499957+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mimelint cmd=getlint mime=1 score=0 threshold=100 duration=0.000
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.499913+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mimelint cmd=getlint lint=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.497214+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=headers hfrom=noreply@company.com routes= notroutes=*
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.495249+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=data rcpt_routes=default_inbound rcpt_notroutes=journal data_routes= data_notroutes=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.494969+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=data rcpt=recipeint2@company.com suborg=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.494950+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=data rcpt=recipient1@company.com suborg=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.494892+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=data from=noreply@company.com suborg=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.489936+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mail cmd=env_rcpt r=2 value=recipient2@company.com orcpt=RECIPIENT2@company.com verified= routes=default_inbound notroutes=journal
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.488974+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mail cmd=env_rcpt r=1 value=recipient1@company.com orcpt=RECIPIENT1@company.com verified= routes=default_inbound notroutes=journal
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.487458+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mail cmd=env_from value=noreply@company.com ofrom=NoReply@company.com size= smtputf8= qid=42y9nt2euv-1 tls= routes= notroutes=tls_fallback host=sendinghost.company.com ip=10.20.30.40
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.486235+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv mod=mail cmd=helo value=sendinghost extended=1 routes=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.484673+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv mod=session cmd=resolve host=sendinghost.company.com resolve=ok reverse=sendinghost.company.com routes=allow_relay notroutes=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.065376+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv mod=session cmd=connect ip=10.20.30.40 country=** lip=50.60.70.80 prot=smtp:smtp hops_active=f routes=internalnet notroutes=firewallsafe,outbound,pp_spoofsafe,spfsafe,tls,xclient_trusted perlwait=0.002
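These events are space-delimited key=value pairs, including the s= session identifier discussed above. To illustrate the kind of key/value extraction involved, here is a quick sketch in Python; the function name and the trimmed sample event are my own, not from the logs verbatim:

```python
import re

def parse_kv(event: str) -> dict:
    """Pull space-delimited key=value pairs out of one event.

    Quoted values (e.g. subject="MESSAGE SUBJECT") are matched first so
    embedded spaces survive; unquoted values run to the next whitespace.
    """
    pairs = {}
    for key, quoted, bare in re.findall(r'(\w[\w-]*)=(?:"([^"]*)"|(\S*))', event):
        pairs[key] = quoted if quoted else bare
    return pairs

# Trimmed excerpt of the first sample event above
event = ('rprt s=42y9nt2euv mod=session cmd=disconnect helo=sendinghost '
         'msgs=1 rcpts=2 duration=0.128 elapsed=100')
fields = parse_kv(event)
# fields["s"] is "42y9nt2euv" - the session identifier used in the by clause
```

Splunk's automatic KV extraction (or an `extract pairdelim=" " kvdelim="="` call) does essentially this at search time, which is why the s field is available even though it is only one letter long.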
Thank you marnall. I will try this approach and report back.
If this is the only thing modifying your metrics index, you could verify whether the data is not mcollected at all or just "mistimed". Run

| mstats count(*) where index=<your_metrics_index>
| transpose 0
| stats sum("row 1") as total

over All Time, before and after the scheduled search runs, and verify the counts.
Hello, I need help regarding an add-on which I built. This add-on was built using the Splunk Add-on Builder; it passed all the tests and can be installed on Splunk Enterprise and also on a single instance of Splunk Cloud. However, when it is installed on a cluster it does not work properly. The add-on is supposed to create some CSV files and store them in the add-on. However, when it is installed in a clustered Splunk environment, it will not create the CSV files and just does not download the files it was supposed to download. Any help or advice is welcome, please. This is the add-on: https://classic.splunkbase.splunk.com/app/7002/#/overview