"Hello everyone, how are you? I am trying to perform a search in the Cylance Protect app, where I have the following event as an example: 2023-02-08T13:25:10.484000Z sysloghost CylancePROTECT - - - Event Type: Threat, Event Name: threat_changed, Device Name: NB-2071, IP Address: (172.47.102.56), File Name: main.exe, Path: C:\DIEF2023.2.0, Drive Type: Internal Hard Drive, File Owner: AUTORIDADE NT\SISTEMA, SHA256: 8B2F7F3120DD73B2C6C4FEA504E60E65886CC9804761F8F1CBE18F92CA20AC44, MD5: 70D778C4A1C17C2EFD2D7F911668E887, Status: Quarantined, Cylance Score: 100, Found Date: 2/8/2023 1:25:10 PM, File Type: Executable, Is Running: False, Auto Run: True, Detected By: FileWatcher, Zone Names: (HOMOLOGAÇÃO), Is Malware: False, Is Unique To Cylance: False, Threat Classification: PUP - Generic, Device Id: 6c4e6c22-bf96-4de4-897b-cea83b8989b4, Policy Name: Política de Proteção N3 - Bloqueio In this case, note the SHA256 parameter, it is the basis of the Panel that I need to create. The thing is that I need to generate a chart that presents the number of different SHA256s that were detected month by month. Observing the following rules: If an SHA256 was detected in January, the chart should count one If the same SHA256 is detected again in February, the chart should count it again However, if the same SHA256 was detected twice in the same month, the chart will only count as one. I tried various different ways to perform this search. However, I was not successful. 
Here are some examples:

eventtype=cylance_index sourcetype=syslog_threat Tenant="$Tenant$" * Status=Quarantined
| timechart span=1mon count as Total

This works, but it counts the number of monthly events; that is, the same SHA256 is counted more than once.

eventtype=cylance_index sourcetype=syslog_threat Tenant="$Tenant$" * Status=Quarantined
| dedup SHA256
| stats count as Total by month
| timechart span=1mon sum(Total) as Total

This time the error was "No Results Found".

eventtype=cylance_index sourcetype=syslog_threat Tenant="$Tenant$" * Status=Quarantined
| stats count by SHA256, month
| timechart span=1mon sum(count) as Total

Again, "No Results Found". Thank you in advance."
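A sketch of one possible fix (assuming SHA256 is available as a search-time field, as in the sample event): timechart's distinct-count function dc() counts each SHA256 value once per monthly bucket, which matches all three rules at once. It also avoids the failures of the later attempts, since "month" is not an extracted field, and timechart needs _time, which stats discards.

```
eventtype=cylance_index sourcetype=syslog_threat Tenant="$Tenant$" Status=Quarantined
| timechart span=1mon dc(SHA256) as Total
```

A hash seen twice in March counts once for March; the same hash seen again in April counts again for April, because each monthly bucket computes its distinct count independently.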
One of my cold bucket indexers lost its connection to the SAN, and now I have a lot of data and files in the cold bucket's lost+found directory. 1. How can I re-ingest these data or put them back in their actual folders in the cold bucket? 2. What can I do with the data in the lost+found directory located in the cold bucket?
Getting this error with either the HTTPS or the SSH connectivity test:

App 'Git' started successfully (id: xxxxxxxxx) on asset: 'bit_bucket' (id: 3)
Loaded action execution configuration
Querying to verify the repo URI
Configured repo URI: either https or ssh
Error while calling configured URI
Connectivity test failed. No action executions found.

I have no real logs, and when I try to run it within a playbook using the connect-over-SSH option I get this error:

Failed to push: Check credentials and git server availability
Hi Splunkers, in the Tech Talk "7 Tips To Boost Performance of Your Splunk Enterprise Security Operations" there was an app that was supposed to be released a long time ago, called "Perfinsights". I have searched for it on Splunkbase and Google but couldn't find it anywhere! Can someone please point me in the right direction? The Tech Talk can be found here: https://www.youtube.com/watch?v=UXFIKMJHwgs
Hello, can I mix different types of disks, for example SSDs and HDDs, when installing search heads or indexers in an on-premises environment? Best regards,
Announcing Splunk User Groups: Plugged-In

For over a decade, Splunk User Groups have helped their local Splunk users find success and provided networking opportunities for attendees. We are excited to announce a new program for 2023: Splunk User Groups: Plugged-In. Connect with your local community and Splunk experts, charge up your Splunk knowledge, and share your valuable experience with your peers!

The Plugged-In program allows user group leaders to hold a bespoke event experience fully sponsored by Splunk. We want to help our user group leaders solve the common challenges they face, such as sourcing technical content and growing their chapter membership. Splunk User Group leaders will be holding Plugged-In events throughout 2023, and we don't want you to miss out on any of the great content and fun! It doesn't matter whether you're new to Splunk or a veteran; we will partner with leaders of qualified user groups to create an event for you. Keep an eye out for upcoming Plugged-In events with the Splunk user group chapters of which you are a member.

When will a Plugged-In user group event happen near me? Become a member of your local Splunk user group to be notified of any events, including upcoming Plugged-In events.

Will these be in-person, virtual, or hybrid events? A Plugged-In event could be any of these. Your user group leader will partner with the Splunk Community Team to create a great event for your chapter.

Why should I join my local Splunk User Group? Because they're fun! Splunk User Groups provide geo-based options for you to connect with peers, learn the latest technologies, and dive deep into all things Splunk. Members come from all kinds of industries, companies, and backgrounds, ranging from novice to expert. Connect with others in your local Splunk community to collaborate and share ideas!

Questions? Contact the Splunk Community Team at usergroups@splunk.com
Hi All, I have a field named ip_address which has 50 IP values in it. At every 5-minute interval, I receive the same values:

ip_address
10.1.1.1
10.1.1.2
10.1.1.3
.
.
.
10.1.1.49
10.1.1.50

What are some ways to list the values that are not coming into Splunk? Let's say 10.1.1.2 and 10.1.1.45 are not coming in; I need those missing values listed in a statistics table so I can create an alert for the missing IP addresses. What are some ways to achieve this? Please help. Thanks in advance.
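One common pattern, offered as a sketch: keep the full list of expected addresses in a lookup file (the name expected_ips.csv and its ip_address column are assumptions here), append it to the recent events with a zero count, and keep only the addresses with no real events in the window.

```
index=your_index earliest=-5m
| stats count by ip_address
| append [| inputlookup expected_ips.csv | eval count=0]
| stats sum(count) as total by ip_address
| where total=0
```

Scheduled every 5 minutes, any row this returns is an expected IP that sent nothing in the window, so the alert condition can simply be "number of results > 0".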
Hello, what I am trying to do in this search is sum the total CPU seconds, by report class, for a one-day period. Once I have that sum, I would like to take it one step further and multiply it by our MSU factor to determine the MSUs used by a specific report class on any given day. I believe I need to store the result of the timechart statement in a new variable, to be able to multiply that variable by the MSUFactor. I have not had any luck trying combinations of 'eval' statements or leveraging the AS keyword to store the result in a variable I can work with further. I appreciate any help you may be able to offer!

index=z* MFSOURCETYPE=SMF030 Subtype=2 `calccpusecs`
| where Rptcls IN("RHOTBAT","RPDBATLO","RPDBATMD","RSAGBAT","RTSTBAT")
| eval MSUFactor=(37209.3023/5/216000)
| timechart span=1d sum(cpusecs) by Rptcls
| addcoltotals
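One way to sidestep storing the timechart result in a variable, offered as a sketch: fields created by eval before a timechart are aggregated away, so instead apply the MSU conversion to each event's cpusecs first, then let timechart sum the already-converted values.

```
index=z* MFSOURCETYPE=SMF030 Subtype=2 `calccpusecs`
| where Rptcls IN("RHOTBAT","RPDBATLO","RPDBATMD","RSAGBAT","RTSTBAT")
| eval MSUs=cpusecs*(37209.3023/5/216000)
| timechart span=1d sum(MSUs) by Rptcls
| addcoltotals
```

Multiplying per event and summing gives the same result as summing and then multiplying, since the factor is constant. If you would rather keep raw CPU seconds and convert afterwards, a foreach after the timechart (e.g. | foreach * [eval <<FIELD>>='<<FIELD>>'*0.00003445]) can multiply every series column in place.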
Hi All, we are planning to migrate the KV store storage engine from mmap to wiredTiger. I know it is safe to disable the KV store on indexers, but I'm just wondering what steps to take if we do need to upgrade the storage engine from mmap to wiredTiger on an indexer cluster.
Hi, does anyone have a method, app, or query that can check and compare the confs between all SHC members? Perhaps there is a way with btool or rsync. I was given a PS Tech Assessment App, but it is not working correctly; I don't think the PS tech knew how to install or use it. Thank you
I have a dataset that uses a character Splunk does not treat as a segmenter to separate meaningful and commonly-used search terms. Sample events:

123,SVCA,ABC123,DEF~AP~SOME_SVC123~1.0,10.0.1.2 ,67e15429-e44c-4c27-bc9a-f3462ae67125,,2023-02-10-12:00:28.578,14,ER40011,"Unauthorized"
123,SVCB,DEF456,DEF~LG~Login~1.0,10.0.1.2,cd63b821-a96c-11ed-8a7c-00000a070dc2,cd63b820-a96c-11ed-8a7c-00000a070dc2,2023-02-10-12:00:28.578,10,0,"OK"
123,SVCC,ZHY789,123~XD-ABC~OtherSvc~2.0,10.0.1.2 ,67e15429-e44c-4c27-bc9a-f3462ae67125,,2023-02-10-12:00:28.566,321,ER00000,"Success"
456,ABC1,,DEFAULT~ENTL~ASvc~1.0,10.0.1.2 ,b70a2c11-286f-44da-9013-854acb1599cd,,2023-02-10-11:59:44.830,14,ER00000,"Success"
456,DEF2,,456~LG~Login~v1.0.0,10.0.0.1,27bee310-a843-11ed-a629-db0c7ca6c807,,2023-02-10-11:59:44.666,300,1,"FAIL"
456,ZHY3,ZHY45678,DEF~AB~ANOTHER_SVC121~1.0,10.0.0.1 ,19b79e9b-e2e2-4ba2-a7cf-e65ba8da5e7b,,2023-02-10-11:58:58.813,,27,ER40011,"Unauthorized"

Users will often search for individual items separated by the ~ character, e.g.:

index=myindex sourcetype=the_above_sourcetype *LG*

My goal is to reduce the need for leading wildcards in most searches here, as this is a high-volume dataset, by adding '~' as a minor segmentation character at index time. I've tried the following props.conf and segmenters.conf without success. Could anyone provide any insight?

<indexer> SPLUNK_HOME/etc/apps/myapp/local/props.conf

[the_above_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^([^,]*,){7}
TIME_FORMAT = %Y-%m-%d-%H:%M:%S.%3Q
TRUNCATE = 10000
MAX_TIMESTAMP_LOOKAHEAD = 50
SEGMENTATION = my-custom-segmenter

SPLUNK_HOME/etc/apps/myapp/local/segmenters.conf

[my-custom-segmenter]
MINOR = / : = @ . - $ # % \\ _ ~ %7E

I added those and bounced my test instance, but searching

index=myindex sourcetype=the_above_sourcetype LG

does not return results such as the events above, whereas *LG* as a term does return them.
We want to use ITSI with universal forwarders (Windows and *nix). Which is best practice: enabling the metrics inputs in the UF's own local directory, or in the local directory of the Windows/*nix add-on?
I have two lists: one is a list of hostnames, and the other is a list of hostname prefixes. I would like to create a search whose output shows hosts whose names do not contain any of the prefixes in the second list. Example:

Inputlookup (Hostname)    Lookup (Hostname Prefix)
appletown                 town
treeville                 tree

I would like a search that lists the hostnames from the first list that do not contain any of the prefixes from the second.
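A sketch of one approach (the file names hostnames.csv and prefixes.csv and the field name prefix are assumptions): turn the prefix list into a wildcard-match filter with a subsearch and format, then exclude the hosts that match any of them.

```
| inputlookup hostnames.csv
| search NOT
    [| inputlookup prefixes.csv
     | eval Hostname="*".prefix."*"
     | fields Hostname
     | format]
```

The subsearch expands to ( Hostname="*town*" ) OR ( Hostname="*tree*" ), so the NOT keeps only hostnames containing none of the prefixes. If the prefixes only ever appear at the end of the name (as in the example), dropping the trailing "*" makes the match stricter.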
Splunkbase is showing different versions of apps to different people. For instance, the Splunk ThreatConnect App is on version 1.0.5 in my view of Splunkbase, while my colleague only sees version 1.0.4 for the same app. Can someone from support explain why that is?
Hi folks, is there a way to enable SSL certificate validation for an 'httpout' stanza in 'outputs.conf', like we can with 'sslVerifyServerCert' etc. for 'tcpout' stanzas? I wasn't able to find a solution for this. The UF doesn't seem to verify against the CA provided in 'server.conf/sslConfig/caCertFile' either. Thanks for your help!
I have to set up an alert with a cron schedule running from 9pm to 7am. When I tried to build it with crontab syntax, it did not work. How can I create a cron schedule for these timings?
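A sketch, assuming the alert should fire once an hour during that window (adjust the minute field and frequency as needed): a cron hour range cannot wrap past midnight, so an overnight window is expressed as two comma-separated ranges.

```
0 21-23,0-6 * * *
```

This fires at minute 0 of every hour from 21:00 through 23:00 and from 00:00 through 06:00. Extend the second range to 0-7 if a run at 07:00 itself is wanted.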
I am writing a playbook that loops through a list of case IDs to delete. The action fails after hitting 50, and I have written the playbook to continue if there are still valid IDs left in the list. However, after the failure it won't continue on to the remediation code I have written further down. How can I get the playbook to continue after a failed action and just mark that action as failed, rather than exiting the playbook?
Hi, I am unable to use strptime() correctly here. My code is:

index="ABC"
| eval time=strptime(_time, "%Y-%m-%dT%H:%M:%S")
| bin time span=15m
| table time

But the table has no output. Can you please help? Thanks!
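A sketch of why this returns nothing, and one possible fix: _time is already a Unix epoch number, while strptime() expects a string in the given format, so the eval yields null for every event. Binning _time directly and formatting it only for display avoids the problem.

```
index="ABC"
| bin _time span=15m
| eval time=strftime(_time, "%Y-%m-%dT%H:%M:%S")
| table time
```

strptime() is only needed in the other direction, when converting a string timestamp field into an epoch value.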
I am trying to get a report of all distribution groups in Splunk, showing what they send in, say, a 12-month period. We don't have the Exchange application add-on, and my understanding is that it no longer really exists. Is there a way to identify distribution groups in a search? I have access to the index and I can find distribution groups, but I can't search for the DLs themselves. Thanks in advance.
Hi, I'm trying to maintain each user's very first login using a lookup; the scheduler runs twice a day. The userLogin field is a combination of username, userId, and a uniqueId associated with each login. I only want the username and userId from the userLogin field, so I can maintain a single record per user, but to know the exact logged-in user records I also have to display the userLogin field, and I have to maintain the user's earliest login dateTime.

Currently I'm using a CSV lookup holding records for the past three months, but if I expand it to the past six months I'll have to update the earliest login dateTime for existing users in the lookup and append new users with their login dateTime. I'm a bit worried about performance if the number of records in the lookup file grows. Here's the query I've managed to write, but I'm struggling to track the earliest dateTime. Any suggestions would be highly welcome. Thanks in advance.

index=user_login_details
| rex field=userLogin "(?<userName>\s+\d{5}).*"
| dedup userName
| eval Time=strftime(_time,"%Y-%m-%dT%H:%M:%S")
| table userName, userLogin, Time
| inputlookup user_details.csv append=true
| dedup userName
| outputlookup user_details.csv append=true
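A sketch of one way to track the earliest time (field and file names follow the original query; the rex is kept as-is): dedup keeps whichever record happens to arrive first, so instead merge the new events with the existing lookup rows, sort everything by the login time, and keep the first (oldest) record per user.

```
index=user_login_details
| rex field=userLogin "(?<userName>\s+\d{5}).*"
| eval Time=strftime(_time,"%Y-%m-%dT%H:%M:%S")
| table userName, userLogin, Time
| inputlookup user_details.csv append=true
| eval epoch=strptime(Time,"%Y-%m-%dT%H:%M:%S")
| sort 0 epoch
| stats first(userLogin) as userLogin, first(Time) as Time by userName
| outputlookup user_details.csv
```

Because the existing lookup participates in the sort, an already-stored earlier login always wins over a newer event, and new users are simply added; writing with outputlookup (without append) replaces the file with this merged result. If the file grows large, a KV store lookup tends to handle frequent read-merge-write cycles better than a CSV.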