All Posts

Not sure what else to put; this is what my data looks like:

thisisfield1 thisisfield2 mynextfield3
thisisfield1 mynextfield3

I want these two lines to display as:

field1          field2          field3
thisisfield1    thisisfield2    mynextfield3
thisisfield1                    mynextfield3
It is 12 years later, and this is still an issue. You cannot set 'requireClientCert=true' in server.conf on, for example, a deployment server and still have a working Web UI on that deployment server. Setting 'requireClientCert=true' in server.conf still breaks the Web UI as of late November 2024.
Hi @fatsug,

Adding to what was previously discussed, you can break down the behavior and where it occurs by merging $SPLUNK_HOME/etc/system/default/props.conf and $SPLUNK_HOME/etc/apps/Splunk_TA_nix/default/props.conf. @PickleRick's use of btool in the last question hints at how to do this:

$SPLUNK_HOME/bin/splunk btool --debug props list lastlog

The source type will also inherit [default] settings provided by any app, plus any additional [lastlog] settings you may have. Looking at the settings relevant to the question:

[lastlog]
KV_MODE = multi
LINE_BREAKER = ^((?!))$
SHOULD_LINEMERGE = false
TRUNCATE = 1000000

The LINE_BREAKER, SHOULD_LINEMERGE, and TRUNCATE settings tell Splunk the boundaries of the event. These settings are used by the line breaker and aggregator on a heavy forwarder or indexer; they are not, except under specific conditions, used by a universal forwarder. The SHOULD_LINEMERGE setting disables reassembly of multiple lines (delineated by the LINE_BREAKER setting) into a single event; in this case, the LINE_BREAKER setting does that work for us more efficiently. As @PickleRick noted, the regular expression ^((?!))$ matches nothing. When lastlog.sh is executed, its entire output up to TRUNCATE = 1000000 bytes (~1 MB) is indexed as one event:

USERNAME        FROM        LATEST
user2           10.0.0.1    Wed Oct 30 11:20
another_user    10.0.0.1    Wed Oct 30 11:21
discovery       10.0.0.2    Tue Oct 29 22:19
scanner         10.0.0.3    Mon Oct 28 21:39
admin_user      10.0.0.4    Mon Oct 21 11:19
root            10.0.0.1    Tue Oct 1 08:57

If Splunk_TA_nix is not installed on the search head, then the sample above is what you would see as a single event in your search results.

What happens if the lastlog.sh output is longer than 1,000,000 bytes? All data after the 1,000,000th byte is simply truncated and lost. For example:

USERNAME        FROM        LATEST
user2           10.0.0.1    Wed Oct 30 11:20
another_user    10.0.0.1    Wed Oct 30 11:21
discovery       10.0.0.2    Tue Oct 29 22:19
scanner         10.0.0.3    Mon Oct 28 21:39
admin_user      10.0.0.4    Mon Oct 21 11:19
root            10.0.0.1    Tue Oct 1 08:57
...
~1,000,000 bytes of data
jsmit

If the "t" in "jsmit" is the 1,000,000th byte, then that's the last byte indexed, and everything after "jsmit" is truncated.

If Splunk_TA_nix is installed on the search head, then the KV_MODE = multi setting tells Splunk to pipe the events through the multikv command before returning the results. On disk, there is only one event, and that event has been split into six separate events using the first line for field names. If Splunk_TA_nix is not installed on the search head, you can include the multikv command in your search to produce the same results:

index=main sourcetype=lastlog | multikv

You can also test multikv directly using sample data:

| makeresults
| eval _raw="USERNAME        FROM        LATEST
user2           10.0.0.1    Wed Oct 30 11:20
another_user    10.0.0.1    Wed Oct 30 11:21
discovery       10.0.0.2    Tue Oct 29 22:19
scanner         10.0.0.3    Mon Oct 28 21:39
admin_user      10.0.0.4    Mon Oct 21 11:19
root            10.0.0.1    Tue Oct 1 08:57"
| multikv

If you have Splunk_TA_nix installed on your forwarders, your heavy forwarders, your indexers, and your search heads, then everything is "OK." Technically, your heavy forwarders are doing the parsing work instead of your indexers; however, if your indexers are Linux, you probably want to run lastlog.sh on them, too. Eventually, though, you'll realize that auditd, /var/log/auth.log, /var/log/secure, etc. are better sources of login data, although /var/log/wtmp and lastlog.sh are useful if you want regular snapshots of login times relative to wtmp retention on your hosts. You may find that log rotation for wtmp is either misconfigured or not configured at all on older Linux hosts.
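If lastlog.sh output on a busy host could approach the 1 MB limit, one option is to raise TRUNCATE in a local override on the heavy forwarder or indexer. A minimal sketch, assuming the TA's app directory; the value is illustrative, not a recommendation:

# $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/props.conf
[lastlog]
# raise the per-event byte limit so long lastlog output is not cut off
TRUNCATE = 5000000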
Assuming s has already been extracted:

| eval time=strftime(_time, "%m-%d-%y %T")
| rex "env_from\s+value=(?<sender>\S+)"
| rex "env_rcpt\s+r=\d+\s+value=(?<receiver>\S+)"
| stats first(time) as Date first(ip) as ConnectingIP first(reverse) as ReverseLookup last(action) last(msgs) as MessagesSent count(receiver) as NumberOfMessageRecipients first(size) as MessageSize1 first(attachments) as NumberOfAttachments values(sender) as Sender values(receiver) as Recipients first(subject) as Subject by s
| where Sender!=""
| stats list(ConnectingIP) as ConnectingIP list(ReverseLookup) as ReverseLookup by Sender
Thanks @PickleRick, I didn't see that, and it wasn't specifically extracted in the search.
Wait a moment. What exactly are you trying to do? Because it sounds as if you were trying to use the Web UI to configure... inputs(?) in an app which you installed using the Web UI somewhere on a SH cluster. Or are you doing something else?
@ITWhisperer Look closer. There is an s=identifier pair in the event.
It all depends on how your fields are delimited/anchored. @marnall's answer is obvious if you have just two or three words separated by spaces. If your "layout" is different, you have to adjust it, as in the sketch below.
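A minimal sketch against the sample data from the question, assuming the first and last fields are always present and only the middle one is optional (the field names are assumptions):

| makeresults count=2
| streamstats count
| eval _raw=if(count=1, "thisisfield1 thisisfield2 mynextfield3", "thisisfield1 mynextfield3")
| rex "^(?<field1>\S+)\s+(?:(?<field2>\S+)\s+)?(?<field3>\S+)$"
| table field1 field2 field3

When the middle token is missing, the optional group fails to match and field2 is simply left empty, which produces the two-row table the question asks for.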
Two things:

1) If these values are specific to particular sources, I'd add them at the source as _meta entries to an input stanza on the initial forwarder.
2) These will be indexed fields and need to be added to fields.conf. You have to remember to set INDEXED_VALUE=false for them. Otherwise Splunk will not be able to find them unless you explicitly use the field::value syntax.

A minimal sketch of both pieces follows.
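A sketch assuming a monitor input; the monitored path is illustrative, and the field values are taken from the question:

# inputs.conf on the initial forwarder
[monitor:///var/log/myapp.log]
_meta = ServerName::mobiwick ServerIP::10.30.xx.56.78

# fields.conf on the search head
[ServerName]
INDEXED_VALUE = false

[ServerIP]
INDEXED_VALUE = false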
@meetmshah Thanks for your suggestion, I will definitely try it. Meanwhile, before your suggested workaround, I tried it myself with the INGEST_EVAL attribute in transforms.conf together with props.conf and fields.conf, and it is working.
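For reference, an INGEST_EVAL version of this could look like the following sketch; the stanza name and the sourcetype placeholder are assumptions, and the values come from the question:

# transforms.conf
[add_static_fields]
# each assigned field becomes an indexed field on the event
INGEST_EVAL = ServerName="mobiwick", ServerIP="10.30.xx.56.78"

# props.conf
[<sourcetype>]
TRANSFORMS-add_static_fields = add_static_fields

# fields.conf
[ServerName]
INDEXED_VALUE = false

[ServerIP]
INDEXED_VALUE = false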
Thanks - what is s in your search's by clause? It doesn't appear to be in your data.
Hello @uagraw01, I believe the below should work.

props.conf:

[<sourcetype>]
TRANSFORMS-add_fields = add_additional_field

transforms.conf:

[add_additional_field]
REGEX = .*
FORMAT = ServerName::mobiwick ServerIP::10.30.xx.56.78
WRITE_META = true

The above will add the 2 additional fields to the events. Note that it will not update the _raw events. Please accept the solution and hit Karma, if this helps!
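Once indexed, you could verify the new fields without touching _raw, for example with tstats (the index name is a placeholder):

| tstats count where index=<your_index> ServerName=mobiwick by ServerIP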
Along with what @richgalloway suggested (Splunk Security Essentials / SSE), I would also go for Splunk ES Content Update (ESCU, https://splunkbase.splunk.com/app/3449). The analytic stories and their searches are also available at https://github.com/splunk/security_content. Please hit Karma, if this helps!
Hello @Raphy, AFAIK there's no default method which mandates having an owner assigned when closing a notable event. That being said, you can do either of the following:

1. Have a default owner assigned - https://community.splunk.com/t5/Splunk-Enterprise-Security/Is-it-possible-to-auto-assign-notables-in-Enterprise-Security
2. Schedule a search which periodically gives you a list of notables where no owner is assigned:

| inputlookup incident_review_lookup
| where status="Closed" AND isnull(owner)

Please accept the solution and hit Karma, if this helps!
More often than not, it would be because the default macro has not been updated - it holds the information about which index the data resides in. As @Bhumi suggested, can you share the name of the TA so we can assist you further?
Hello @zksvc, was the notable created after you updated the next actions, or was it already generated and you updated the Correlation Search later?
Hello @grep, can you please try removing the whitelisting from the "CIM Setup" page and only have the condition available from the Macro page? Let me know if it doesn't work and I can troubleshoot.
Hi @darkins , could you share some samples of your logs, highlighting the strings to extract? Ciao. Giuseppe
Hello Splunkers!! I have a raw event, but the fields server IP and server name are not present in it, and I need to add both fields in Splunk at index time. Both fields have static values. What attributes should I use in props.conf and transforms.conf so that I can get both of these fields?

ServerName="mobiwick"
ServerIP="10.30.xx.56.78"

Sample raw data:

<?xml version="1.0" encoding="utf-8"?><StaLogMessage original_root="ToLogMessage"><MessageId>6cad0986-d4b2-45e2-b5b1-e6a1af3c6d40</MessageId><MessageTimeStamp>2024-11-24T07:00:00.1115119Z</MessageTimeStamp><SenderFmInstanceName>TOP/Top</SenderFmInstanceName><ReceiverFmInstanceName>BPI/Bpi</ReceiverFmInstanceName><StatisticalElement><StatisticalSubject><MainSubjectId>NICKER</MainSubjectId><SubjectId>Prodtion</SubjectId><SubjectType>PLAN</SubjectType></StatisticalSubject><StatisticalItem><StatisticalId>8</StatisticalId><Period><TimePeriodEnd>2024-11-24T07:00:00Z</TimePeriodEnd><TimePeriodStart>2024-11-24T06:00:00Z</TimePeriodStart></Period><Value>0</Value></StatisticalItem></StatisticalElement></SogMessage>
Hi, I posted sample log entries. I am not sure how readable this is.