Activity Feed
- Got Karma for Re: How do I concatenate two fields into a string?. 10-31-2024 03:26 AM
- Got Karma for Re: How do I concatenate two fields into a string?. 07-18-2024 07:16 AM
- Got Karma for Re: Finding search strings when all you have is an expired SID. 07-26-2022 05:32 PM
- Got Karma for Re: How do I concatenate two fields into a string?. 07-22-2022 04:27 PM
- Got Karma for Re: How do I concatenate two fields into a string?. 07-13-2022 09:30 AM
- Got Karma for Re: How do I concatenate two fields into a string?. 06-24-2022 01:04 PM
- Got Karma for Re: Index gzipped files without .gz extension. 02-06-2022 10:39 AM
- Got Karma for Re: DNS Lookup via Splunk. 09-10-2021 02:56 AM
- Got Karma for Re: Cannot get to Splunk Web interface. 08-19-2021 05:10 AM
- Got Karma for Re: How do I concatenate two fields into a string?. 05-26-2021 11:26 AM
- Got Karma for Re: DNS Lookup via Splunk. 02-19-2021 06:58 AM
- Got Karma for Re: Improve speed of "append". 11-18-2020 04:49 AM
- Got Karma for Re: Error binding to LDAP. reason="Can't contact LDAP server".. 09-18-2020 04:04 AM
- Karma ldapsearch - Getting objectSid of group members for althomas. 06-05-2020 12:50 AM
- Karma How to display what exactly has been modified in Saved searches/alerts and dashboard? for ahmedbabour. 06-05-2020 12:50 AM
- Karma Re: Renaming Existing Deployment App for nickhills. 06-05-2020 12:50 AM
- Karma How to integrate Microsoft Cloud App security with Splunk for ips_mandar. 06-05-2020 12:50 AM
- Karma Error on maclookup command -- netaddr not found. for nysdanharrison. 06-05-2020 12:50 AM
- Karma Re: Error on maclookup command -- netaddr not found. for MuS. 06-05-2020 12:50 AM
- Karma Re: Error on maclookup command -- netaddr not found. for MuS. 06-05-2020 12:50 AM
03-25-2020 03:36 AM
That is the search @jaime.ramirez proposes in his answer:
| datamodel
| rex field=_raw "\"modelName\"\s*:\s*\"(?<modelName>[^\"]+)\""
| fields modelName
| table modelName
| map maxsearches=40 search="tstats summariesonly count from datamodel=$modelName$ by sourcetype,index | eval modelName=\"$modelName$\""
03-25-2020 03:34 AM
This answer is also helpful: https://answers.splunk.com/answers/597619/list-all-datamodels-with-the-feeds-index-sourcetyp.html
11-21-2019 02:28 AM
Did you ever manage to resolve this?
11-19-2019 12:34 AM
Wow, that's a clever search, thank you!
11-18-2019 11:17 PM
Worked for me, thanks.
11-18-2019 09:45 PM
1 Karma
This is the props.conf that worked for me:
[source::D:\\path\\to\\log\\*]
#Default
#unarchive_cmd = _auto
#On linux
#unarchive_cmd = gzip -cd -
#On windows
unarchive_cmd = splunk-compresstool -g
invalid_cause = archive
NO_BINARY_CHECK = true
is_valid = False
11-18-2019 08:35 AM
Turns out that I forgot to escape the \ in the Windows path in props.conf.
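For anyone hitting the same thing, a minimal before/after sketch of the stanza header (the path is a placeholder):
# broken: unescaped backslashes in the Windows path
# [source::D:\path\to\log\*]
# working: every backslash escaped
[source::D:\\path\\to\\log\\*]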
11-18-2019 08:29 AM
Hi,
I am trying to index gzipped files that do not have the .gz extension on a Windows universal forwarder.
First I got the following messages in splunkd.log:
11-18-2019 15:06:33.698 +0100 INFO TailReader - Ignoring file 'D:\path\to\log\messages_xyz' due to: binary
11-18-2019 15:06:33.698 +0100 WARN FileClassifierManager - The file 'D:\path\to\log\messages_xyz' is invalid. Reason: binary.
Looking at how Splunk handles gzipped files in the props.conf under system/default, I tried to put the following props.conf together:
[mysourcetype]
invalid_cause = archive
NO_BINARY_CHECK = true
is_valid = False
[source::D:\path\to\log\*]
#Default
#unarchive_cmd = _auto
#On linux
#unarchive_cmd = gzip -cd -
#On windows
unarchive_cmd = splunk-compresstool -g
Trying out splunk-compresstool manually seems to work:
.\splunk-compresstool.exe -g 'xyz'
2019-03-27 16:01:34.000 device kern.info kernel: udevd version 124 started
2019-03-27 16:01:34.000 device kern.info kernel: net eth0: eth0: allmulti set
2019-03-27 16:01:34.000 device kern.info kernel: net eth0: eth0: allmulti set
2019-03-27 16:06:44.000 devicekern.warn kernel: JFFS2 warning: (793) jffs2_sum_write_data: Not enough space for summary, padsize = -376
This is what I see in splunkd.log
11-18-2019 16:55:47.351 +0100 INFO ArchiveProcessor - Handling file=xyz
11-18-2019 16:55:47.351 +0100 INFO ArchiveProcessor - reading path=xyz (seek=0 len=211534)
11-18-2019 16:55:47.402 +0100 INFO ArchiveProcessor - Finished processing file 'xyz', removing from stats
And this is what I see in metrics.log
11-18-2019 17:03:47.471 +0100 INFO Metrics - group=per_source_thruput, ingest_pipe=0, series="xyz", kbps=0, eps=0.03224797474898443, kb=0, ev=1, avg_age=0, max_age=0
Although metrics.log says ev=1, I do not see any events in the index (and there should be more than one event per file).
Is there a possibility to see what the ArchiveProcessor is doing?
Shouldn't Splunk just recognize file types without depending on the extension?
Regards
Chris
11-12-2019 05:17 AM
1 Karma
We have the same issue. I think the problem is that the TA is not designed to fail in a safe way; the error you see in the UI is rather generic.
You can easily change the behaviour by editing the maclookup.py file.
except Exception as e:
    logger.error('failed to use the netaddr module!')
    # splunk.Intersplunk.generateErrorResults(': failed to use the netaddr module!')
    # exit()
    line['vendor'] = "not found"
    line['maclookup_error'] = str(e)
    list.append(line)
If the lookup is not successful in offline mode, you will get a hint about the error in the maclookup_error field.
@MuS: You wouldn't happen to have time to change the behaviour?
Regards
Chris
11-05-2019 11:52 PM
Hi pinVie, did you manage to integrate those logs? Any chance of sharing which logs you integrated? Regards, Chris
07-08-2018 11:02 PM
1 Karma
https://github.com/droe/xnumon might also help; it's "sysmon for macOS".
06-21-2018 02:06 AM
Still not supported in 7.1. It would be nice to have a remoteFrozenPath in 7.2.
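For reference, a sketch of the two options indexes.conf offers today, using a hypothetical index name and paths; only one of the two settings can be set per index, and neither targets remote storage directly:
[my_index]
# copy frozen buckets to a local directory
coldToFrozenDir = /opt/splunk/frozen/my_index
# or hand each frozen bucket to a script that could push it to remote storage
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/my_cold_to_frozen.py"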
06-07-2018 03:44 AM
The lookup contains 7 million entries now. Querying it takes about 5-10 seconds, which is OK, but updating it now takes up to 45 minutes. I'll try some other way of doing this.
05-11-2018 01:39 AM
Hi,
I would like to keep track of the DNS queries that are made in our environment. I defined a KV store and a lookup as follows:
transforms.conf
[passive_dns]
collection = passive_dns
external_type = kvstore
fields_list = _key,domain,count,client_count,first,last
collections.conf
[passive_dns]
field.count = number
field.client_count = number
field.domain = string
field.first = time
field.last = time
replicate = false
This is the query I use to populate the fields:
index=dns sourcetype=clientdns
| stats count dc(src) as client_count earliest(_time) as first latest(_time) as last by query
| rename query as domain
| append maxout=0 [inputlookup passive_dns ]
| stats max(client_count) as client_count sum(count) as count min(first) as first max(last) as last values(_key) as _key by domain
| outputlookup passive_dns
The idea is to keep track of how many times a domain was resolved and when it was first/last resolved (I will add query types & IPs later on).
If I run this query every 15 minutes, it starts taking more than 15 minutes to run after a couple of executions because the KV store keeps growing. (We have about 4 million DNS resolutions per hour.)
This is obviously not the right way to do it. Is there a way I can avoid rewriting the entire lookup on every execution and only update the required entries, and/or speed up the KV store updates?
Regards
Chris
Update
I was able to speed up the search by filtering out entries in the lookup that did not change:
index=dns sourcetype=clientdns
| stats count dc(src) as client_count earliest(_time) as first latest(_time) as last by query
| rename query as domain
| eval hash_before=md5(""+domain+first+last+count+client_count)
| append maxout=0 [inputlookup passive_dns ]
| stats max(client_count) as client_count sum(count) as count min(first) as first max(last) as last values(_key) as _key values(hash_before) as hash_before by domain
| eval hash_after=md5(""+domain+first+last+count+client_count)
| eval entry_status=case(isnull(hash_before), "new", hash_before!=hash_after,"changed",hash_before==hash_after,"same",true(),"strange")
| where entry_status!="same"
| fields domain client_count count first last
| outputlookup append=True passive_dns
The query time is less than 60s at the moment. The lookup will grow quite a bit. I will find out how quickly the performance will degrade (and if the lookup is still usable once it contains millions of entries).
Update 2
I noticed that | append maxout=0 hits maxresultrows in limits.conf [searchresults]. I should probably not increase that limit too much.
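For reference, a minimal sketch of the relevant limits.conf stanza (the value is an example, not a recommendation):
[searchresults]
# default is 50000; raising it increases memory use for every search
maxresultrows = 100000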
- Tags:
- kvstore
05-02-2018 12:48 AM
1 Karma
Hi,
Has anyone implemented a Simple XML dashboard that shows a table with checkboxes? I would like to update a lookup/KV store with information from the result rows that are selected. ES has this feature in the incident review dashboard.
The "Edit Selected" or "Edit All x Matching Events" action should then update the lookup. My programming skills are limited; I'd be grateful for an example that I can modify/adapt to my needs.
Regards
Chris
- Tags:
- splunk-enterprise
04-26-2018 07:16 AM
Hi, I would like to do something very similar: add checkboxes to every row and then have a button that does a mass update to the KV store. Would you mind sharing the addMonitor function?
03-14-2018 03:02 AM
One of my teammates found a solution:
| dbxquery connection="ePO_server" query="SELECT
EPOLeafNode.NodeName, EPOLeafNode.LastUpdate,
EPOProdPropsView_THREATPREVENTION.verDAT32Major AS AMCoreContent,
EPOComputerProperties.UserName AS [user] FROM EPOLeafNode LEFT JOIN
EPOProdPropsView_THREATPREVENTION ON EPOLeafNode.AutoID =
EPOProdPropsView_THREATPREVENTION.LeafNodeID LEFT JOIN EPOComputerProperties ON
EPOLeafNode.AutoID = EPOComputerProperties.ParentID"
Regards
Chris
03-14-2018 02:22 AM
Hi,
I used the following DB query to find out what VSE version our clients have:
| dbxquery connection="ePO_server" query="SELECT
EPOLeafNode.NodeName, EPOLeafNode.LastUpdate,
EPOProdPropsView_VIRUSCAN.datver AS vse_dat_version,
EPOComputerProperties.UserName AS [user] FROM EPOLeafNode LEFT JOIN
EPOProdPropsView_VIRUSCAN ON EPOLeafNode.AutoID =
EPOProdPropsView_VIRUSCAN.LeafNodeID LEFT JOIN EPOComputerProperties ON
EPOLeafNode.AutoID = EPOComputerProperties.ParentID"
We are moving to ENS now; has anyone gone through the trouble of implementing a similar query for ENS?
I know that this is not a Splunk problem; I will also post on the McAfee forums.
Regards
Chris
01-24-2018 02:22 PM
Thanks, I have also opened a case with Splunk.
01-24-2018 06:54 AM
Hi,
Going through Sysmon logs, I noticed that the Splunk forwarder (version 6.6.3) starts AcroRd32.exe on Windows clients.
Does anyone know why? We are not indexing/monitoring the PDFs or the paths where the PDFs are located. Can this be turned off?
This is a sample event:
01/17/2018 03:17:38 PM
LogName=Microsoft-Windows-Sysmon/Operational
SourceName=Microsoft-Windows-Sysmon
EventCode=1
EventType=4
Type=Information
ComputerName=server.domain.org
User=NOT_TRANSLATED
Sid=S-1-5-18
SidType=0
TaskCategory=Process Create (rule: ProcessCreate)
OpCode=Info
RecordNumber=4300197
Keywords=None
Message=Process Create:
UtcTime: 2018-01-17 14:17:34.391
ProcessGuid: {F0E459B7-5AFE-5A5F-0000-00109C69EE2E}
ProcessId: 12428
Image: C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe
CommandLine: "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" --type=renderer "C:\Users\user\AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.Outlook\9A5H81Q9\Untitled (28).pdf"
CurrentDirectory: C:\Users\user\AppData\Local\Microsoft\Windows\Temporary Internet Files\
User: DOMAIN\user
LogonGuid: {F0E459B7-F487-5A5E-0000-0020274C0F00}
LogonId: 0xf4c27
TerminalSessionId: 1
IntegrityLevel: Low
Hashes: MD5=F7C513664BD4A9DB4ABBEB2B5E4E01D2,IMPHASH=1439821F22F484CB770EECF65574FF20
ParentProcessGuid: {F0E459B7-4701-5A5F-0000-00102595771B}
ParentProcessId: 11408
ParentImage: C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.exe
ParentCommandLine: "C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.exe" --ps2
Regards
Chris
01-18-2018 03:52 AM
I had the same issue on 6.6.2. We upgraded to 7.0.1 now and it is fixed.
01-10-2018 08:11 AM
Hi,
This article describes how NTLM v1 and LM usage can be detected: https://blogs.technet.microsoft.com/askds/2012/02/02/purging-old-nt-security-protocols/
Based on the article I came up with the following Wireshark filter:
(ntlmssp.auth.ntresponse) ||( !(ntlmssp.auth.lmresponse == 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00) && (ntlmssp.auth.lmresponse))
Is there a way I could configure/abuse the Splunk App for Stream to log events based on that filter?
It will probably be difficult/impossible to configure a regex-based field using "src_content" or "dest_content".
In Splunk_TA_stream/default/vocabularies/smb.xml or Splunk_TA_stream/default/streams/smb I do not see any fields that correspond to the LAN Manager Response or the NTLMv1 Response.
Running strings on streamfwd and grepping for smb shows that there is an SMBProtocolHandler implemented, so I suspect that the binary would have to be modified. Is this assumption correct?
Regards
Chris
12-21-2017 03:57 AM
It is probably best to contact Splunk if you need the data from unified logging; that way they can push SPL-129734 internally. For now we rely on some scripts from the Unix TA; I have heard that others use https://osquery.io/
12-03-2017 11:20 PM
The Splunk forwarder can be installed and configured to index information from unified logging; there is just no out-of-the-box functionality for that.
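As a rough sketch of how that could look, assuming a hypothetical wrapper script around the built-in log command (the script name, interval, sourcetype and index are all made up):
inputs.conf on the forwarder:
[script://./bin/unified_log.sh]
# the script could run something like: log show --style syslog --last 5m
interval = 300
sourcetype = mac:unifiedlog
index = macos
disabled = false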