All Topics


I have a Splunk query where the message field contains the following text: "message":"sypher:[tokenized] build successful -\xxxxy {\"data\":{\"account_id\":\"ABC123XYZ\",\"activity\":{\"time\":\"2024-05-31T12:37:25Z\}}"

I need to extract the value ABC123XYZ, which sits between account_id\":\" and \",\"activity. I tried the following query, but it returns no data:

index=prod_logs app_name="abc" | rex field=_raw "account_id\\\"\:\\\"(?<accid>[^\"]+)\\\"\,\\\"activity" | where isnotnull(accid) | table accid
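Since the quotes in the raw event are themselves escaped with backslashes, the pattern has to survive two rounds of escaping (SPL string parsing, then PCRE), which is usually where this breaks. One way to sidestep that is to match the separator characters generically. A hedged sketch, reusing the index and field names from the question; the \W+/\w+ character classes are my assumption about what surrounds the ID:

```spl
index=prod_logs app_name="abc"
| rex field=_raw "account_id\W+(?<accid>\w+)"
| table accid
```

Here \W+ consumes the literal \":\" between the key and the value, and \w+ captures ABC123XYZ. If account IDs can contain non-word characters such as hyphens, the capture class would need widening, e.g. [^\\\"]+.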
I installed the Snort 3 JSON Alerts add-on. I made changes in inputs.conf (/opt/splunk/etc/apps/TA_Snort3_json/local) like this:

[monitor:///var/log/snort/*alert_json.txt*]
sourcetype = snort3:alert:json

When I search for events like below (sourcetype="snort3:alert:json") there is NOTHING. But Splunk knows there is something in that path, and how much, like below. What more I can tell is what Splunk reports when starting:

Value in stanza [eventtype=snort3:alert:json] in /…/TA_Snort3_json/default/tags.conf, line 1 is not URL encoded: eventtype = snort3:alert:json
Your indexes and inputs configurations are not internally consistent. For more info, run 'splunk btool check --debug'

Please help.
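A common cause of "the monitor sees the file but the search finds nothing" is events landing in an index the search doesn't cover. A hedged inputs.conf sketch; the index name here is my assumption and would need to exist in indexes.conf:

```spl
# local/inputs.conf sketch - pin the destination index explicitly
# (index name "snort" is an assumption; create it first)
[monitor:///var/log/snort/*alert_json.txt*]
sourcetype = snort3:alert:json
index = snort
disabled = false
```

Then search index=snort sourcetype="snort3:alert:json", or run index=* sourcetype="snort3:alert:json" over All time to find where the events actually went.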
Hello all, our current environment is a three-site cluster: two sites on-premises (14 indexers, 7 in each site) and one site (7 indexers) hosted on AWS. The AWS indexers were clustered recently. It has been almost 15 days, but the replication factor and search factor are still not met. What might be the reason, and what are the possible ways I can resolve this? There are around 300 fixup tasks pending, and the number has remained the same for the past 2 weeks. I've manually rolled the buckets, but still no use.
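One way to see why the fixups are stuck is to ask the cluster manager for the reason attached to each pending task. A hedged sketch run from the manager node; the endpoint is from the clustering REST API and the field names may differ slightly by Splunk version:

```spl
| rest /services/cluster/master/fixup splunk_server=local
| table title initial.reason latest.reason
```

The reason strings (e.g. buckets pending a target peer that is down, or cross-site replication blocked) usually point at whether this is a connectivity, capacity, or site-affinity problem.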
Hello all, I enabled the indicators feature with "/opt/phantom/bin/phenv set_preference --indicators yes". I have two problems that might be connected: 1. I only enabled three fields in the Indicators tab under Administration, but SOAR still created many indicators on fields that are configured as disabled. 2. I see that enabling the indicators feature is consuming all my free RAM, and I have a lot of RAM, so I understand there is a problem with this. Can anyone say why, and how to solve it?
I want to separate events by date. I want to isolate the red highlights that have similar formats, but I don't know how. I would appreciate it if you could tell me how.
Hi everyone, is there a way to speed up Splunk SOAR's event processing? It can't process 100 events every 5 minutes. I found a solution about adding workers, but the file that solution talks about, "uwsgi.ini", doesn't exist.
How do I make a visualization report of the rap song I have written in a Word doc?
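Splunk is built for machine data, so a plain script may be the shorter path here: export the lyrics from the Word doc to a text file, count word frequencies, and chart or tabulate the top terms. A minimal Python sketch; the file name and the example lyrics are placeholders:

```python
from collections import Counter
import re

def top_words(text, n=10):
    """Return the n most common words in the text, ignoring case."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

# Example: lyrics exported from the Word doc as plain text
# (in practice: text = open("song.txt").read() - placeholder name)
lyrics = "yeah the beat drops and the beat goes on and on"
for word, count in top_words(lyrics, 3):
    print(word, count)
```

If a Splunk dashboard is the goal, the per-word counts could be written out as a CSV and ingested like any other file, but for a one-off document a standalone chart is far less effort.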
Is there a condition or command for detecting a manual dashboard refresh? Whenever I click the dashboard's refresh button it refreshes, but I also want the refresh to set a particular token to a specific value. Is that possible? Something like a condition for refreshing a dashboard, e.g. if(dashboard refresh, 0, 1).
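There is no built-in "on refresh" condition in Simple XML, but a common workaround is to add your own refresh control as a link input, whose change handler sets whatever tokens you need. A hedged sketch; the token names here are hypothetical:

```xml
<!-- Simple XML sketch: a link input used as a manual refresh control.
     Clicking "Refresh" sets my_token (hypothetical name) and then
     unsets the form token so the link can be clicked again. -->
<fieldset>
  <input type="link" token="refresh_click">
    <label></label>
    <choice value="go">Refresh</choice>
    <change>
      <set token="my_token">refreshed</set>
      <unset token="form.refresh_click"></unset>
    </change>
  </input>
</fieldset>
```

Searches that depend on $my_token$ will then re-run each time the control is clicked, which gives you the "set a token on refresh" behavior, just tied to your own button rather than the stock refresh icon.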
Hi experts, I am going through the installation and setup of the Splunk App for Data Science and Deep Learning. I have come across the minimum requirements for the transformer GPU container at: https://docs.splunk.com/Documentation/DSDL/5.1.2/User/TextClassAssistant What are the minimum requirements for a CPU-only Docker host machine in general when using this toolkit? Thanks, MCW
Hi everyone, I have a problem with line breaking in Splunk. I have tried following the methods in other posts. Here is my props.conf:

[test1:sec]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=AUTO
disabled=false
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%9QZ
TIME_PREFIX=<TimeCreated SystemTime='

When I applied this sourcetype to raw Windows events it worked at first, but after I finished, the raw Windows data came in as one event instead of breaking per event.
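Given the TIME_PREFIX, these look like XML-rendered Windows events, where each record starts with an <Event ...> tag. Breaking on every newline then splits mid-record (or, if the file has no newlines between records, never splits at all), so one common approach is to break on the <Event boundary instead. A hedged sketch; the lookahead assumes the XML WinEventLog format:

```spl
[test1:sec]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<Event )
NO_BINARY_CHECK = true
CHARSET = AUTO
TIME_PREFIX = <TimeCreated SystemTime='
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9QZ
```

Note that LINE_BREAKER is applied at parse time, so this stanza must live on the first full Splunk instance the data passes through (indexer or heavy forwarder), and only events indexed after the change will reflect it.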
Hi Team, how do I write a calculated field for the below? The names portion of the field value will keep changing, and this version is not working:

| eval action=case(like("request.path","auth/ldap/login/names"),"success")
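Two things are worth checking in that eval: like() expects a field as its first argument (double-quoting it turns it into a literal string), and the pattern needs % wildcards for a partial match. A hedged sketch; since the field name contains a dot, it goes in single quotes, and the "unknown" default branch is my addition:

```spl
| eval action=case(like('request.path', "%auth/ldap/login/%"), "success", true(), "unknown")
```

The trailing % after login/ is what lets the changing names segment match without being spelled out.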
Is the current Symantec Blue Coat TA (v3.8.1) compatible with SGOS v7.3.16.2? Has anyone got this to work who can provide some insight? After our proxy admins upgraded from 6.7.x to 7.x, all the field extractions ceased to work. The release notes say it is compatible up to 7.3.6.1. Is there an updated TA that we are not aware of? https://docs.splunk.com/Documentation/AddOns/released/BlueCoatProxySG/Releasenotes Thanks.
index=testindex sourcetype=json source=websource | timechart span=1h count by JobType

This is my search query to generate a timechart in Splunk. The JobType field has two values, 'Completed' and 'Started'. In the timeframe between a Completed event and the next Started event there are no jobs running, so I need to create a new state called 'Not Running' to illustrate when nothing is running. The time between a Started and a Completed event needs to be called 'Running', because that is when jobs are running. I need to visualize these states in a timechart.

Example: a job completes on 01/06/2024 at 17:00 (Completed). The next job starts on 01/06/2024 at 20:00 (Started). Between 17:00 and 20:00 on 01/06/2024 the state is 'Not Running'.

I do not want to capture individual jobs; I want to capture all jobs together. The main values I want to illustrate in the timechart are the 'Running' and 'Not Running' states, so basically I want to illustrate the gaps between the 'Started' and 'Completed' events. I am stuck with this, so it would be awesome if I could get some help. Thank you.
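One hedged way to sketch this, reusing the field values from the question: map each event to the state it begins (Started opens 'Running', Completed opens 'Not Running'), then carry that state forward across the empty time buckets with filldown, so the chart shows the state during the gaps rather than just at the transition points:

```spl
index=testindex sourcetype=json source=websource
| eval state=if(JobType="Started","Running","Not Running")
| timechart span=1h latest(state) as state
| filldown state
```

This yields one state value per hour bucket, which suits a single-series visualization; if overlapping jobs need to be accounted for (several Started before a Completed), a running counter via streamstats would be needed instead of the simple if().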
The Splunk App for Windows Infrastructure 2.0.4 is EOL and the app itself has been archived. I tried to download this app because I want to reuse some of the dashboards available in it, but I am unable to, and I get the message: "This app restricts downloads to a defined list of users. Your user profile was not found in the list of authorized users." Is there a way around this? How can I get hold of the Splunk App for Windows Infrastructure 2.0.4? Kind regards, Jos
Hi, this is probably a product-related question. I have a requirement to monitor EDI files (834, an enrollment file in healthcare terms) end to end. I would like to see the number of EDI files received, processed, and saved, and analyse the file-processing failures. Which Splunk product(s) best suits my need?
Hello, I need help with the following scenario. Let's say I have a log source with browser traffic data, and one of the available fields is malware_signature. I made a lookup table to filter the results to 10 specific malwares I'd like to be alerted on. All 10 entries have wildcards, with another field called classification:

malware_signature, classification
*mimikatz*, high

When I use inputlookup to filter the results it works well, but no matter what I tried I can't get the classification field added.

Works well for filtering:
[| inputlookup malware_list.csv | fields malware_signature]

classification field won't show:
[| inputlookup malware_list.csv | fields malware_signature classification]

Doesn't work:
[| inputlookup malware_list.csv | fields malware_signature] | lookup malware_list.csv malware_signature OUTPUT classification

Clarification: I use inputlookup to filter the results to the logs I want by malware_signature. After that I want to enrich the table with the classification field, but the lookup command won't match the malware_signature entries containing wildcards.
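The lookup command does exact matching by default, which is why the wildcarded entries never match. The usual route is a lookup definition with a WILDCARD match type on that field. A hedged sketch; the definition name is my choice:

```spl
# transforms.conf sketch (or Settings > Lookups > Lookup definitions,
# Advanced options > Match type)
[malware_list]
filename   = malware_list.csv
match_type = WILDCARD(malware_signature)
```

The enrichment then references the definition rather than the .csv file:

... | lookup malware_list malware_signature OUTPUT classification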
Hello fellow Splunkers! We want to ingest Oracle Fusion Application (SaaS) audit logs into Splunk on-prem, and the only way to do this is through the REST API GET method. Since I cannot find a REST input option in Splunk or any free add-on from Splunk for this task, all I have read on the internet is to develop a script. I need your support to share a sample Python script that should not only pull the logs but also avoid duplicate logs with every pull. Thanks in advance!
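A common pattern for "pull without duplicates" is a checkpoint file: remember the timestamp of the newest event already forwarded and drop anything at or before it on the next poll. A hedged Python sketch; the endpoint URL, the basic-auth scheme, and the items/eventTime response fields are all assumptions to replace with what the Oracle Fusion audit REST documentation actually specifies:

```python
import base64
import json
import urllib.request
from pathlib import Path

# Placeholder endpoint - replace with the real Fusion audit resource path.
BASE_URL = "https://example.fa.ocs.oraclecloud.com/audit/records"
CHECKPOINT = Path("oracle_audit.checkpoint")

def load_checkpoint():
    """Timestamp of the newest event already forwarded, or None on first run."""
    return CHECKPOINT.read_text().strip() if CHECKPOINT.exists() else None

def new_events(records, checkpoint):
    """Drop records at or before the checkpoint so re-polls don't duplicate.

    Assumes each record has an ISO-8601 'eventTime' string, which compares
    correctly under plain string ordering."""
    return [r for r in records if checkpoint is None or r["eventTime"] > checkpoint]

def fetch(url, user, password):
    """GET a JSON document with HTTP basic auth (auth scheme is an assumption)."""
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def poll(user, password):
    """One polling cycle: fetch, filter against checkpoint, emit, advance."""
    checkpoint = load_checkpoint()
    payload = fetch(BASE_URL, user, password)      # single page, for brevity
    fresh = new_events(payload.get("items", []), checkpoint)
    for record in fresh:
        print(json.dumps(record))                  # stdout -> Splunk scripted input
    if fresh:
        CHECKPOINT.write_text(max(r["eventTime"] for r in fresh))
```

Run as a Splunk scripted (or modular) input on an interval, with stdout going to the index; a production version would also page through results and handle auth-token refresh and transient HTTP errors.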
Hello, I would like my Unifi UDM-SE router/firewall to send its logs to my VM (Splunk + Ubuntu Server). What I have done:
- on the Proxmox VM, no firewall (during the test)
- on my VM I have two NICs: one for management (network 205) and one for the remote logging location (Splunk, network 203, the same as my UDM network)
- on my VM, ufw is running, and I have opened ports 9997 and 514
- on my UDM-SE, I have forwarded the syslog to my remote Splunk server (network 203)
On the Splunk server, ports 514 and 9997 are listening. So far, no logs appear in Splunk. How does ufw deal with two different networks? How do I add the second NIC (network 203) to Splunk? Ideas?
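For Splunk to accept the syslog stream directly, a UDP input has to exist in inputs.conf; something listening on 514 at the OS level is not enough. A hedged sketch, with sourcetype my assumption:

```spl
# inputs.conf sketch on the Splunk server
[udp://514]
sourcetype = syslog
connection_host = ip
```

Two caveats: binding a port below 1024 requires Splunk to run as root, which is why many setups relay through rsyslog (or move the input to e.g. udp://5514) instead. And Splunk listens on all interfaces by default, so the 203-network NIC needs no separate Splunk configuration; to scope the ufw rule to that NIC, something like `ufw allow in on <iface> to any port 514 proto udp` (interface name per `ip addr`) would do it.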
I've been fighting this for a week and just spinning in circles. I'm building a new distributed environment in a lab to prep for live deployment. All is RHEL 8, using Splunk 9.2: 2 indexers, 3 SHs, cluster manager, deployment manager, 2 forwarders. Everything is "working"; I just need to tune it now.

The indexers are cranking out 700,000 logs per hour, and 90% of it is coming off audit.log, largely the indexers' own bucket activity being audited. We have a requirement to monitor audit.log at large, but no requirement to index what the buckets are doing. I've been looking at different approaches, but I imagine I'm not the first person to encounter this. Would it be better to tune audit.rules on the Linux side? Blacklist some keywords in the indexers' inputs.conf? Tune through props.conf? Would really appreciate some advice on this one. Thanks!
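Both approaches can work (auditd exclude rules are cheapest, since the noise never gets written at all), but on the Splunk side the usual tool for dropping events by content is a nullQueue transform at parse time. A hedged sketch; the sourcetype and the regex are assumptions to adapt to how the audit data is actually onboarded and what the noisy records look like:

```spl
# props.conf sketch (on the indexers / first full Splunk instance)
[linux:audit]
TRANSFORMS-drop_bucket_noise = drop_bucket_noise

# transforms.conf sketch - discard audit records referencing Splunk's
# own bucket paths (path is an assumption; match your SPLUNK_DB)
[drop_bucket_noise]
REGEX = /opt/splunk/var/lib/splunk
DEST_KEY = queue
FORMAT = nullQueue
```

Note that inputs.conf blacklists operate on file paths, not event content, so they won't help here; and dropped events still count toward parsing load, which is another argument for also trimming audit.rules on the Linux side.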