All Posts
I just did a clean install of Splunk 9.1.1 and then Splunk Security Essentials 3.7.1, and am getting this error. @m_pham
Set up your dropdown so the value for "read" is 0 and the value for "write" is 1, then use the token in the search:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=values$token$ path=values{$token$}
| spath output=dsnames$token$ path=dsnames{$token$}
| stats min(values$token$) as min max(values$token$) as max avg(values$token$) as avg by dsnames$token$
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)
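For reference, a minimal Simple XML sketch of such a dropdown (the token name "token", the label, and the default are assumptions, not taken from the original dashboard):

<input type="dropdown" token="token">
  <label>Disk operation</label>
  <choice value="0">Read</choice>
  <choice value="1">Write</choice>
  <default>0</default>
</input>

With this in place, selecting Read substitutes 0 for $token$ everywhere in the search, and selecting Write substitutes 1.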
Hi @PickleRick, there's still something wrong. I used this props.conf:

[logstash]
TRANSFORMS-00_sethost = set_hostname_logstash
TRANSFORMS-01_setsource = set_source_logstash_linux
TRANSFORMS-05_setsourcetype_linux_audit = set_sourcetype_logstash_linux_audit
TRANSFORMS-06_setsourcetype_linux_secure = set_sourcetype_logstash_linux_secure
TRANSFORMS-50_drop_dead_linux_audit = drop_dead_linux_audit
TRANSFORMS-51_drop_dead_linux_secure = drop_dead_linux_secure

[linux_audit]
TRANSFORMS-0_drop_all_except_proper_ones_linux_audit = drop_dead_all,keep_matching_events_linux_audit
#TRANSFORMS-conditional_host_overwrite = conditional_host_overwrite
TRANSFORMS-cut_most_of_the_event_linux_audit = cut_most_of_the_event_linux_audit

[linux_secure]
TRANSFORMS-0_drop_all_except_proper_ones_linux_secure = drop_dead_all,keep_matching_events_linux_secure
#TRANSFORMS-conditional_host_overwrite = conditional_host_overwrite
TRANSFORMS-cut_most_of_the_event_linux_secure = cut_most_of_the_event_linux_secure

and this transforms.conf:

# set host=host.name
[set_hostname_logstash]
REGEX = \"host\":\{\"name\":\"([^\"]+)
FORMAT = host::$1
DEST_KEY = MetaData:Host

# set source=log.file.path for linux_audit and linux_secure
[set_source_logstash_linux]
REGEX = \"log\":\{\"file\":\{\"path\":\"([^\"]+)
FORMAT = source::$1
DEST_KEY = MetaData:Source

[set_sourcetype_logstash_linux_audit]
REGEX = \"log\":\{\"file\":\{\"path\":\"(/var/log/audit/audit.log)\"
CLONE_SOURCETYPE = linux_audit

[set_sourcetype_logstash_linux_secure]
REGEX = \"log\":\{\"file\":\{\"path\":\"(/var/log/secure)\"
CLONE_SOURCETYPE = linux_secure

[drop_dead_linux_audit]
REGEX = sourcetype:linux_audit
DEST_KEY = queue
FORMAT = nullQueue

[drop_dead_linux_secure]
REGEX = sourcetype:linux_secure
DEST_KEY = queue
FORMAT = nullQueue

[drop_dead_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_matching_events_linux_audit]
REGEX = sourcetype:linux_audit
DEST_KEY = queue
FORMAT = indexQueue

[keep_matching_events_linux_secure]
REGEX = sourcetype:linux_secure
DEST_KEY = queue
FORMAT = indexQueue

[cut_most_of_the_event_linux_audit]
REGEX = (?ms).*\"message\":\"([^\"]+).*
FORMAT = $1
DEST_KEY = _raw
WRITE_META = true

[cut_most_of_the_event_linux_secure]
REGEX = (?ms).*\"message\":\"([^\"]+).*
FORMAT = $1
DEST_KEY = _raw
WRITE_META = true

But with this configuration I lost the linux_audit transformation, and the removal of the extra content doesn't run. Your approach is correct, but there's probably something wrong in my application of it. Now I'll try to understand where the issue is. Thank you again for your help. Ciao. Giuseppe
Hi folks, I have been trying to create a query that would list the index name and earliest event for indexes that started getting events only during the selected time range. First I'd populate the list of indexes using a query like so:

index=_internal source=/opt/splunk/var/log/splunk/cloud_monitoring_console.log* TERM(logResults:splunk-ingestion)
| rename data.* as *
| fields idx

I want to find out which of the indexes in this list started to index events for the first time only in, say, the last one month. I tried joining this query over idx like so, where tstats would give me the earliest event timestamp in the last 6 months (a good approximation of whether that index ever got data before the last one month):

index=_internal source=/opt/splunk/var/log/splunk/cloud_monitoring_console.log* TERM(logResults:splunk-ingestion)
| rename data.* as *
| fields idx
| rename idx as index
| join index [ | tstats earliest(_time) as earliest_event where earliest=-6mon latest=now index=* by index | table index earliest_event]

But this only gives me correct results when I specify an index name in the base query. For some reason, it doesn't give proper results for all indexes. I tried the map command as well, passing the index dynamically, but the performance of that query isn't ideal as there are hundreds of indexes. I also tried other commands like append, but none gave the expected outcome. I think there is an obvious solution here that's somehow eluding me. Appreciate any help around this.
To get the count over a sliding window you'd need to use - as I mentioned - streamstats with time_window set to your 5 minutes. Then you can do a simple top command or something like that, as in the sketch below.
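A rough sketch of that pipeline, assuming a hypothetical index name (api_logs) and the applicationname/action fields from the question:

index=api_logs
| streamstats time_window=5m count(action) as calls_5m by applicationname
| stats max(calls_5m) as peak_5m_calls by applicationname
| sort -peak_5m_calls
| head 5

Here streamstats computes, for each event, the number of calls from the same applicationname in the preceding 5 minutes; the stats/sort/head tail then keeps the five applications with the highest peak.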
@ITWhisperer Can you please help with my above question? Thanks in advance!
This is an example of an event for EventCode=4726. As you can see, there are two Account Name fields, which the Splunk App parses as ... two account names:

11/19/2023 01:00:38 PM
LogName=Security
EventCode=4726
EventType=0
ComputerName=dc.acme.com
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=1539804373
Keywords=Audit Success
TaskCategory=User Account Management
OpCode=Info
Message=A user account was deleted.

Subject:
    Security ID:     Acme\ScriptRobot
    Account Name:    ScriptRobot
    Account Domain:  Acme
    Logon ID:        0x997B8B20

Target Account:
    Security ID:     S-1-5-21-329068152-1767777339-1801674531-65826
    Account Name:    aml
    Account Domain:  Acme

Additional Information:
    Privileges       -

I want to search for all events with Subject: Account Name = ScriptRobot and then list all Target Account: Account Name values. Knowing that multiline regex can be a bit cumbersome, I tried the following search string, but it does not work:

index="wineventlog" EventCode=4726
| rex "Subject Account Name:\s+Account Name:\s+(?<SubjectAccount>[^\s]+).*\s+Target Account:\s+Account Name:\s+(?<TargetAccount>[^\s]+)"
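One possible shape for such a multiline extraction - a sketch only, untested against live 4726 events and assuming the exact field layout shown above:

index="wineventlog" EventCode=4726
| rex "(?ms)Subject:.*?Account Name:\s+(?<SubjectAccount>\S+).*?Target Account:.*?Account Name:\s+(?<TargetAccount>\S+)"
| search SubjectAccount="ScriptRobot"
| table _time SubjectAccount TargetAccount

The (?ms) flags let . cross newlines, and the lazy .*? quantifiers keep each Account Name capture anchored to its own section of the message.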
@richgalloway's solution is one of the possible answers. It has its pros and cons. The other possibility is to search for all events, do a lookup on them, and find the non-matched ones:

<your_search>
| lookup your_lookup match_field OUTPUT match_field AS new_match_field
| where isnull(new_match_field)

Typically you'd use my option later in the search pipeline, while @richgalloway's solution would probably be more suitable in the initial search.
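A concrete instantiation of the same idea (the index, lookup, and field names here are hypothetical):

index=netfw action=blocked
| lookup allowed_ips src_ip OUTPUT src_ip AS matched_ip
| where isnull(matched_ip)

Events whose src_ip appears in the lookup get matched_ip filled in; the where clause keeps only the ones the lookup didn't match.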
Thank you both. Is there any other way I can achieve this?
I'll definitely be testing as soon as possible.  I'm in a situation where I'm forced to wait on some other folks before I can move forward.
It's not that easy.
1. An often overlooked thing: timechart with span=something just chops time into span-sized slices. It does _not_ do a sliding-window aggregation. I don't think you can get that any other way than with streamstats.
2. limit=X with timechart gives you only the X top results _overall_, not per each bin. (See the sketch below for a per-bin variant.)
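If a per-bin top N is what's needed, one way to sketch it (hypothetical index name; fields as in the thread):

index=api_logs
| bin _time span=5m
| stats count as calls by _time applicationname
| sort 0 _time -calls
| streamstats count as rank by _time
| where rank <= 5

After sorting each 5-minute bin by call count, streamstats numbers the rows within each bin, so the where clause keeps the top five applications per bin rather than overall.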
As with any requirement that is critical and not sufficiently well documented - I'd say try and see. Configure your test environment with IPv6, set up a data source for DBConnect, and verify whether it works. I don't see a reason why it shouldn't, as long as your OS has a properly configured IPv6 layer and your JVM can handle it (which should be a no-brainer - Java has supported IPv6 connectivity for a long time now). But there can always be some unexpected quirks (for example, some component could expect only IPv4-formatted addresses and couldn't handle IPv6 ones).
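One concrete quirk worth checking while you test: JDBC URLs generally require IPv6 literals to be wrapped in square brackets. A hypothetical PostgreSQL example (address, port, and database name are made up):

jdbc:postgresql://[2001:db8::10]:5432/testdb

Hostnames that resolve to AAAA records sidestep that formatting question entirely.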
The general form is:

<<some search that returns field 'foo'>> NOT [ | inputlookup mylookup.csv | fields foo ]

If the lookup file does not contain 'foo' then you'll need a rename command to change what it has to 'foo'.
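For example, if mylookup.csv has a column named ip but the events carry clientip (field and index names here are hypothetical):

index=web sourcetype=access_combined NOT [ | inputlookup mylookup.csv | rename ip AS clientip | fields clientip ]

The subsearch returns the lookup's values under the event field's name, and NOT excludes every event whose clientip matches one of them.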
I am wondering if there's a way to use the dropdown menu and tokens to display two different results. I am trying to have the dropdown menu offer static options of "read" and "write". Read would display this search:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=values0 path=values{0}
| spath output=dsnames0 path=dsnames{0}
| stats min(values0) as min max(values0) as max avg(values0) as avg by dsnames0
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)

Write would display this search:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=values1 path=values{1}
| spath output=dsnames1 path=dsnames{1}
| stats min(values1) as min max(values1) as max avg(values1) as avg by dsnames1
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)

As you can see, the only change between the searches is the element index into the multivalue fields. If there is a way to append the searches and show the results together, that would be helpful as well.
The timechart command has a limit option that will give you the top n results:

| timechart span=5min limit=5 count(action) by applicationname
@yuanliu Thanks.  That was my initial thought as well, especially since I know the JDBC driver supports IPv6.
Hi, can you please let me know how to frame a Splunk query that compares a field from a search with a field from a lookup and finds the unmatched ones from the lookup table?
Hi @AL3Z, sorry, but I still don't understand: are you speaking of an activity on Splunk or on a target system? If on Splunk, it depends on the action that you associated with the alert (you can create a Notable, send an eMail, write to an index or a lookup, etc...). If on the target system, it depends on how you ingest logs from the target system. Ciao. Giuseppe
Hi All, I am trying to get the top n users who made calls to some APIs over a span of 5 minutes. For example, with the query below I can see a chart of the calls made over a period of time, in 5-minute spans:

timechart span=5min count(action) by applicationname

Now I need to select the top n users (applicationname) that had a high number of calls within a single 5-minute span - in the attached image, the users with sudden spikes.
Also remember that a trial Cloud installation does not fully support HEC over TLS. I don't remember the details, though - either you get only an unencrypted connection, or you get the TLS-enabled port but only with some self-signed cert on the server's side.
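As a quick way to see what your trial stack actually serves, a generic HEC test with curl (hostname, port, and token are placeholders - the exact endpoint varies by stack, and -k skips certificate verification, which you'd need against a self-signed cert):

curl -k "https://<your-stack>:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from HEC"}'

A 200 response with {"text":"Success","code":0} means the token and TLS setup are usable.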