It may be complicated, but I think it's necessary. Perhaps it could be better, though. Even if the SH did expand the query (and maybe it does) before sending to the peers, that's just a part of what the bundle is used for. Search-time field extractions and lookups done by the indexers make the query more efficient.
Thanks a lot @richgalloway. That behavior of the SH seems unnecessarily complicated. Instead of sending all of those KO bundles to the indexers, couldn't the SH first expand the SPL query (to resolve all of the search-time names/variables) and then send it to the indexers? Thanks, Michal
The first example runs entirely on the Search Head, where the lookup definition is available. The second example runs on the indexers, which apparently are unaware of the lookup definition. Either the app defining the lookup is not installed on the indexers, or the lookup file is blocked from the knowledge bundle ([replicationDenyList] in distsearch.conf).
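For reference, a bundle exclusion of that kind would sit in distsearch.conf on the Search Head and look roughly like this (a minimal sketch; the stanza key and lookup path below are assumptions, not values from this environment):

# distsearch.conf on the Search Head -- illustrative only
[replicationDenyList]
# the key name is arbitrary; the value is a path pattern relative to $SPLUNK_HOME/etc
excludeBigLookup = apps/my_app/lookups/huge_lookup.csv

Anything matching such a pattern never reaches the indexers, so a | lookup run there fails even though the same lookup works on the SH.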
Thanks @richgalloway. So just to confirm: "To know what results to return to the SH, the peers need to know the values of the tags, eventtypes, and macros used in the query."

Example: "index=_audit eventtype=splunk_access". Since event type extraction is search-time (not index-time), the indexer does not have a definition for that event type. Because of this, the SH needs to push the definition of that event type to the indexer:

[splunk_access]
search = index=_audit "action=login attempt" NOT "action=search"

Once that is done, the indexer will actually expand the original SPL query to "index=_audit index=_audit action=login attempt NOT action=search" and will be able to execute the query correctly.

The same would happen with most of the other Knowledge Objects, including all the search-time field extractions. So the summary would be: the Search Head needs to push Knowledge Objects to the indexers, because for an indexer those are "unknown variables/names". The indexer does not have those definitions and does not know how to expand/execute SPL queries using those KOs. This applies only to search-time operations/objects defined on the SH (index-time configurations like TRANSFORMS should already be on the indexer).

Could you please confirm, @richgalloway, that all of this is correct? Thanks!
Hi! This is a contrived example, but could you help me understand why this completes (and functions as expected):

| makeresults format=csv data="filename
calc.exe"
| lookup isWindowsSystemFile_lookup filename

whilst this:

index=sandbox
| eval filename="calc.exe"
| lookup isWindowsSystemFile_lookup filename

throws an error with the message: ... The lookup table 'isWindowsSystemFile_lookup' does not exist or is not available. The isWindowsSystemFile_lookup is provided by Splunk Security Essentials. Hmm, I'm on Splunk Cloud. Thanks, Kevin
Hi @Ryan.Paredez Thank you for sharing the discussion link. I've carefully read the responses and tried the suggested solutions, but unfortunately, they didn't fix the problems I'm having. If you have any more suggestions or information, please share it with me.
| mstats sum(Transactions) as Transaction_count where index=metrics-logs application=login services IN(get, put, delete) span=1h by services
| streamstats by services
| timechart span=1h values(Transaction_count) by services

Results:

_time              get           put            delete
2024-01-22 09:00   7654.000000   17854.000000   9876.000000
2024-01-22 10:00   5643.000000   2345.000000    1267.000000

From the above query we want to calculate the percentage between 2 values. For example, for the get field we want the percentage between the 2 hours (09:00 and 10:00): 7654.000000/5643.000000*100. How can we do this?
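One possible sketch, assuming the rows are sorted by _time ascending and the timechart output has one column per service as shown above (the prev_* and *_pct field names are made up for illustration):

| mstats sum(Transactions) as Transaction_count where index=metrics-logs application=login services IN(get, put, delete) span=1h by services
| timechart span=1h values(Transaction_count) by services
``` pull the previous hour's value onto the current row, then take the ratio ```
| streamstats current=f window=1 last(get) as prev_get last(put) as prev_put last(delete) as prev_delete
| eval get_pct = round(prev_get / get * 100, 2)
| eval put_pct = round(prev_put / put * 100, 2)
| eval delete_pct = round(prev_delete / 'delete' * 100, 2)

For the sample data above, get_pct on the 10:00 row would be 7654/5643*100, roughly 135.64.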
Hello @VashisthaPandya, do you really need "real" traffic, or would dummy data work? You can generate dummy Windows EventCode traffic through EventGen (https://splunkbase.splunk.com/app/1924), deploy it, and focus on writing an effective search query.
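As a very rough illustration of how EventGen is driven (every name and value below is an assumption, not a tested configuration), an eventgen.conf stanza points at a sample file and describes how to replay it:

# eventgen.conf -- illustrative sketch only
[windows_security_sample.log]
interval = 60
count = 20
outputmode = modinput
index = main
sourcetype = WinEventLog:Security
# rewrite the timestamp in each replayed event to the current time
token.0.token = \d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}
token.0.replacementType = timestamp
token.0.replacement = %m/%d/%Y %H:%M:%S

The sample file itself would contain a handful of realistic Windows Security events (EventCode 4624, 4625, etc.) that EventGen replays on the configured interval.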
The REPORT setting is incorrect. The "REPORT-" keyword identifies this as a report setting and so must be on the left side ("Name"). The name of the transform stanza goes on the right side ("Value").
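To make the direction concrete, a correctly keyed search-time extraction would look something like this (a sketch; the sourcetype, setting, and transform names are placeholders, not the poster's actual values):

# props.conf
[my_sourcetype]
REPORT-extract_fields = my_transform

# transforms.conf
[my_transform]
DELIMS = ","
FIELDS = "field1","field2","field3"

That is, "REPORT-<class>" is the setting name on the left and the transforms.conf stanza name is the value on the right.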
Most of the work of a query is done by the indexers, so they need to know as much about the search as possible. That is what the knowledge bundle is for. To know what results to return to the SH, the peers need to know the values of the tags, eventtypes, and macros used in the query. They also need to know which fields to extract and how to extract them. It's all part of the map/reduce process in which the search activity is divided among many peers to make the query faster. Information sent in the bundle does not modify the settings on the indexer. The bundle supplements the information the peer read from its .conf files. That supplementary data is not visible to either btool or splunk show.
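As a hypothetical illustration of the point about macros (the names below are made up, not from this thread): if the SH's macros.conf contains

[my_error_filter]
definition = status>=500

then a search such as

index=web `my_error_filter`

can only be evaluated as

index=web status>=500

once the peer has access to that definition; without the bundle, the macro is just an unknown name to the indexer.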
So for our final year project we have been assigned the task of implementing a DDoS attack and detecting it with Splunk. Our issue is that we are not getting any logs from Splunk's Add Data input option for Local Windows network monitoring, which seemed to work in the video I was following.

Context of the DDoS: we are using an hping3 TCP SYN flood attack, but its logs aren't coming in through my newly added data input source. All the other network logs are being generated, like the traffic from my GCP instance to the RDP server and back, but those are the only types of logs showing up.

If I were to guess the problem, it might be that there are two IPs provided to us by GCP, an internal and an external IP. I've attacked both, but there is no difference in the incoming logs. I've checked the connectivity between the two VMs on GCP (Windows and Ubuntu) using ping and telnet. I have also turned off the Windows RDP machine's firewall and added a firewall rule that allows ingress TCP packets on ports 80 and 21 (which we are attacking). So my guess, ultimately, is that GCP itself is blocking these types of packets.

I'm still not sure how all these things work (I'm an AI dev, this is not my field), so please help me if you can and have the time! Thank you for reading my question and taking the time to do it. If you have any other questions you need answered to help me, feel free to ask.
Table 1 has single values for the columns for each service, while Table 2 has multiple rows per service. You could duplicate the rows of Table 1 to fill the rows of Table 2, or you could turn the fields of Table 2 into multi-value fields in Table 1. E.g. to do the latter (multi-value field) option:

<query 1>
| append [ <query2> ]
| stats values(*) as * by service
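A self-contained sketch of that pattern using made-up data (the owner and host fields, and the service values, are illustrative assumptions):

| makeresults format=csv data="service,owner
svc_a,alice
svc_b,bob"
| append
    [| makeresults format=csv data="service,host
svc_a,host1
svc_a,host2
svc_b,host3"]
| stats values(*) as * by service

This yields one row per service, with host as a multi-value field wherever Table 2 had several rows for that service.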
Hi @mariamms, here you can find all the information you need about HEC: first https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf (page 28), then https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Data/ShareHECData and https://www.youtube.com/watch?v=qROXrFGqWAU

In a few words, you have to create an HEC receiver by creating a token, which must be passed to the sender. You can also have an intermediate Load Balancer for HA features; in this case, you must have the same token on all the receivers.

About the Console, what do you mean? HEC doesn't have any console. If you're speaking of the cluster consoles, you have to look for the Cluster Master (for an Indexer Cluster) and the SH Deployer (for a Search Head Cluster). You can find information at https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Aboutclusters and https://docs.splunk.com/Documentation/Splunk/9.2.0/DistSearch/AboutSHC

Lastly, there's also a Monitoring Console; you can find information at https://docs.splunk.com/Documentation/Splunk/9.2.0/DMC/DMCoverview

Ciao. Giuseppe
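As a rough sketch of what the receiving side of HEC looks like in configuration (stanza name, index, and sourcetype below are assumptions; on Splunk Cloud the token is created through the UI rather than by editing files):

# inputs.conf on the receiver -- illustrative values only
[http://my_hec_input]
disabled = 0
token = <token-generated-by-Splunk>
index = main
sourcetype = my_hec_sourcetype

The sender then posts events to the collector endpoint using that token in the Authorization header.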
Hello, I have a CSV file in Azure. I've created an input in the "Splunk Add-on for Microsoft Cloud Services" app (Storage Blob input). Also, I've created this in the sourcetype: (screenshot) This is the transforms: (screenshot) And this is the field extraction: (screenshot) But the logs do not get parsed; they are indexed as one line. Am I missing something?
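For context, parsing a structured CSV into one event per row with header-based fields is usually driven from props.conf on the component that first parses the data, along these lines (a sketch only; the sourcetype name and timestamp field are assumptions, and this is not necessarily the fix for the add-on input above):

# props.conf -- illustrative sketch
[my_azure_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = timestamp
SHOULD_LINEMERGE = false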
Ideas votes are a precious resource. As @isoutamo suggested, try bugging the PM periodically through Slack. Hopefully, they'll pick it up without the minimum number of required votes.