All Topics

There are three different events: "input param", "sqs sent count", and "total message published to SQS successfully". With the first event, "input param", I am trying to fetch different entities, say material/supplied material. With the second event, "sqs sent count", I am getting the total SQS sent count for that particular material or supplied material. With the third event, "total message published to SQS successfully", I am getting the count of total messages published. Now I want to publish all of those counts in a single row per object type, displayed in a table in one dashboard panel. Then I want to total the counts for each column and display that as a single row in another panel of the dashboard.
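A sketch of one way to get one row of counts per object type, with a totals row that could feed the second panel (the index and the objecttype field name are assumptions; the event names are taken from the question):

index=your_index ("input param" OR "sqs sent count" OR "total message published to SQS successfully")
| stats sum(eval(if(searchmatch("input param"), 1, 0))) as input_param_count,
        sum(eval(if(searchmatch("sqs sent count"), 1, 0))) as sqs_sent_count,
        sum(eval(if(searchmatch("total message published to SQS successfully"), 1, 0))) as published_count
        by objecttype
| addcoltotals labelfield=objecttype label=Total

The addcoltotals row at the end carries the per-column totals; the same stats could also feed a separate search for the second panel.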
I have a dashboard for my application, and in that dashboard I have created an empty panel to show the application's logs when a certain exception occurs. For that I have added a log.info call with some unique text in it. How do I configure the empty panel on the dashboard so that the specific logs containing that unique text are displayed in the panel from now on?
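The panel ultimately just needs a search to drive it; a minimal sketch (the index and the marker text are placeholders for whatever the log.info call writes):

index=my_app_index "MY_UNIQUE_EXCEPTION_TEXT"
| table _time, host, source, _raw

That search can then be set as the panel's data source, or saved as a report and added to the panel.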
How do I use addcoltotals to calculate a percentage? For example, in my search below, scoreSum_pct is empty. Thank you for your help.

index=test
| stats sum(score) as scoreSum by vuln
| addcoltotals labelfield=vuln label=Total_scoreSum scoreSum
| eval scoreSum_pct = scoreSum/Total_scoreSum*100 . "%"
| table vuln, scoreSum, scoreSum_pct

Result:
vuln            scoreSum  scoreSum_pct
vulnA           20
vulnB           40
vulnC           80
Total_scoreSum  140

Expected result:
vuln            scoreSum  scoreSum_pct
vulnA           20        14.3%
vulnB           40        28.6%
vulnC           80        57.1%
Total_scoreSum  140       100%
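For what it's worth, addcoltotals only appends a totals row; it does not put the total on every event, so Total_scoreSum is null when the eval runs. A sketch of one way around that (untested) is to compute the total with eventstats first:

index=test
| stats sum(score) as scoreSum by vuln
| eventstats sum(scoreSum) as Total_scoreSum
| eval scoreSum_pct = round(scoreSum/Total_scoreSum*100, 1) . "%"
| addcoltotals labelfield=vuln label=Total_scoreSum scoreSum
| eval scoreSum_pct = if(vuln="Total_scoreSum", "100%", scoreSum_pct)
| table vuln, scoreSum, scoreSum_pct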
I need advice on troubleshooting SplunkHecExporter. I'm using an OpenTelemetry Collector to accept logs via OTLP and export them to an on-prem Splunk Heavy Forwarder, which then forwards them to Splunk Cloud. Below is my configuration. I'm sending some test logs from Postman, but the logs don't arrive in Splunk Cloud. I can see the logs arriving in the OpenTelemetry Collector through the debug exporter. I confirmed connectivity to the Splunk Heavy Forwarder by setting an invalid token, which results in an authentication error. Using a valid token doesn't result in any debug logs being recorded. Any suggestions on troubleshooting?

exporters:
  debug:
    verbosity: normal
  splunk_hec:
    token: "<valid token>"
    endpoint: "https://splunkheavyforwarder.mydomain.local:8088/services/collector/event"
    source: "oteltest"
    sourcetype: "oteltest"
    index: "<valid index>"
    tls:
      ca_file: "/etc/otel/config/certs/ca_bundle.cer"
    telemetry:
      enabled: true
    health_check_enabled: true
    heartbeat:
      interval: 10s

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: []
      exporters: [splunk_hec, debug]
  telemetry:
    logs:
      level: "debug"
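One place worth checking on the Heavy Forwarder side is splunkd's own HEC handler logging; a sketch of that search (run wherever the forwarder's _internal data is searchable):

index=_internal sourcetype=splunkd component=HttpInputDataHandler
| stats count by log_level

Parsing and index-routing problems on the forwarder tend to show up there even when the HTTP layer itself accepts the events.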
I have used the below query to get the total from that column:

index="" source=""
| fields queryHits
| table queryHits
| addcoltotals labelfield=total label="queryHits"

Now how do I get only the last row, which is the total, to display in my dashboard? I tried using stats count but it isn't fetching the correct value.
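If only the grand total is needed for the panel, a simpler sketch (field name taken from the question) is to sum the column directly instead of appending a totals row:

index="" source=""
| stats sum(queryHits) as queryHits

Alternatively, | tail 1 after addcoltotals would keep just the appended totals row.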
I am getting extracted_host, extracted_source, and extracted_sourcetype in the interesting fields, along with host, source, and sourcetype in the selected fields, while ingesting logs using an HEC input in Splunk Cloud. Can someone help me understand why I am getting the extracted_host, extracted_source, and extracted_sourcetype fields in the logs even though they are not defined on the source side?
We are in the process of upgrading our 8.2.6 Splunk distributed environment to 9.1.0. I would like to ensure that all of our apps and add-ons are 9.1 compatible (and make sure there is no Python 2 code). Is there an SPL search I can run that can help me identify apps or add-ons that need to be upgraded? I have installed the readiness app; however, it doesn't seem to capture all of the apps and add-ons in our environment.
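As a starting point, the installed apps and their versions can at least be inventoried from SPL via the REST endpoint (this only lists them; compatibility still has to be checked against each app's Splunkbase listing):

| rest /services/apps/local splunk_server=local
| table title, label, version, author, disabled
| sort label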
Index size is 5.3 GB vs 1.6 GB of raw data.

(Screenshots: raw data size; index size on Splunk.)

This is also affecting our licensing plans. This is way bigger than anticipated. I thought it might be 110%, or maybe even 180%, but not 400%. Something's off.
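A sketch for comparing raw data volume against on-disk size per index (the index name is a placeholder; dbinspect reports rawSize in bytes and sizeOnDiskMB in MB):

| dbinspect index=your_index
| stats sum(rawSize) as raw_bytes, sum(sizeOnDiskMB) as disk_mb
| eval raw_mb = round(raw_bytes/1024/1024, 1), pct_of_raw = round(disk_mb/raw_mb*100, 1)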
We followed the steps in https://docs.splunk.com/Documentation/DM/1.8.1/User/AWSAbout to onboard data from a single AWS account. During the data onboarding process, the AWS account details are entered in the UI, after which Splunk generates the CloudFormation template. This template has a DM_ID, DM_Name, and a few indexes that Splunk generates. Does Splunk have an API to script this? Our DevOps team wants to automate this process. PS: I was unable to find this in the API documentation.
Hello, I think this has a simple answer but I'm not able to find a solution. I created a lookup table that looks like this (but of course has more info):

Cidr, ip_address
24, 99.99.99.99/24
25, 100.100.100/25

I only included the Cidr column because I read that a lookup table needs at least two columns, but I do not use it. Let me know if I should! I am trying to find source IPs that match the ip_address entries in my lookup table.

index="index1" [| inputlookup lookup | rename ip_address as src_ip]

I have ensured that Advanced Settings -> Match -> CIDR(ip_address) is set. When the query is run, no matches are found, but I know that there is traffic from those addresses. What am I overlooking?
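For reference, the subsearch form expands to literal src_ip="99.99.99.99/24" terms, so the CIDR match type on the lookup definition never comes into play. A sketch that matches through the lookup definition instead (the definition name ip_lookup is an assumption):

index="index1"
| lookup ip_lookup ip_address AS src_ip OUTPUT ip_address AS matched_cidr
| where isnotnull(matched_cidr)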
I wanted to reach out for help regarding an issue we have been experiencing with one of our customers. We built an app that exports events from a standalone customer's Splunk Enterprise instance. We have that box gather the logs and hold them until they can be exported off the box manually. We used the savedsearches.conf file to schedule a search query script (export.py) to pull events. The problem is that this particular customer is only getting about 11 minutes' worth of logs: the search is scheduled to pull all index events from, say, 3:30pm-4:30pm, but the events only load from 4:19pm-4:30pm. It does this across all times consistently, missing the first ~49 minutes of events, for example:

4:19pm-4:30pm
5:19pm-5:30pm
6:19pm-6:30pm

We have an export.py script that goes out and gathers all index=* events according to the specified cron.

savedsearches.conf:
cron_schedule = 30 */1 * * *
enablesched = 1
dispatch.ttl = 1800
allow_skew = 10m
search = | export
disable = 0

To compensate for lag, we built the export.py script to pull events starting one hour prior. This is the part of the script dealing with the specific search:

now = str(time.time()-3600).split(".")[0]
query = "search index=* earliest=" + last_scan + "  lastest=" + now + "  

Once the script is done, it writes the current epoch time ("now") to a file, which is used as the start time for the next scheduled run. Any help would be appreciated.
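One quick check (time window illustrative) to separate an indexing-lag problem from a search-window problem is whether the "missing" first ~49 minutes of an older window are actually searchable after the fact:

index=* earliest=-3h@h latest=-3h@h+49m
| stats count by index

If the events are there when searched later but absent from the export, the issue is more likely the window arithmetic or scheduling than missing data.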
I was able to find this search that gives me the number of users (IONS) who disconnected 10 or more times; however, it gives me the total across the whole time range. I would like to display a daily number for 30 days in a line chart. For example, on Monday there were 10 users who disconnected over 10 times, and so on for the rest of the week. I can't seem to get timechart to work with this:

index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=Host1 OR host=host2)
| rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)" offset_field=_extracted_fields_bounds
| stats count by Device IONS
| where count >= 10
| appendpipe [| stats count as IONS | eval Device="Total"]
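A sketch of one way to get a per-day count of users who crossed 10+ disconnects that day (the rex is copied from the question; untested):

index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=Host1 OR host=host2) earliest=-30d@d
| rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)"
| bin _time span=1d
| stats count by _time IONS
| where count >= 10
| timechart span=1d dc(IONS) as "Users with 10+ disconnects"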
Hi, I'll explain myself better with an example. I have the following values in a radio input:

Name -> Value
MB -> 1024/1024
GB -> 1024/1024/1024
(...)

With $token_name$ I can use the selected value in my search, but I would also like to use the name/label of the selected size in the chart legend. Is it possible to do this in Dashboard Studio? Thanks in advance for your help.
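If a native label token turns out not to be available, one workaround sketch is to map the value back to its label inside the search itself, appended to the existing pipeline (values copied from the example above):

| eval size_label = case("$token_name$" == "1024/1024", "MB",
                         "$token_name$" == "1024/1024/1024", "GB",
                         true(), "$token_name$")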
Hello, I want to get Rspamd logs into Splunk with all the info available. The best I could do with the Rspamd config yields this:

2023-11-03 13:02:24 #56502(rspamd_proxy) <7fcfc8>; lua; [string "return function (t...:4: METATEST {"qid":"8BC8C2F741","user":"unknown","ip":"188.68.A.B","header_from":["foo bar via somelist <somelist@baz.org>"],"header_to":["list <somelist@baz.org>"],"header_subject":["proper subject"],"header_date":["Fri, 3 Nov 2023 08:00:43 -0400 (EDT)"],"scan_time":2457,"rcpt":["me@myself.net"],"size":6412,"score":-5.217652,"subject":"proper subject","action":"no action","message_id":"4SMK7v2HQTzJrP1@spike.bar.org","fuzzy":[],"rspamd_server":"rack.myself.net","from":"somelist-bounces@baz.org","symbols":[{"score":-0.500000,"group":"composite","groups":["composite"],"name":"RCVD_DKIM_ARC_DNSWL_MED"},{"score":0,"group":"headers","groups":["headers"],"name":"FROM_HAS_DN"},{"score":0,"group":"headers","options":["somelist@baz.org","somelist-bounces@baz.org"],"groups":["headers"],"name":"FROM_NEQ_ENVFROM"},{"score":-0.010000,"group":"headers","groups":["headers"],"name":"HAS_LIST_UNSUB"},{"score":0,"group":"headers","options":["somelist@baz.org"],"groups":["headers"],"name":"PREVIOUSLY_DELIVERED"},{"score":-1,"group":"abusix","options":["188.68.A.B:from"],"groups":["abusix","rbl"],"name":"RWL_AMI_LASTHOP"},{"score":-0.100000,"group":"mime_types","options":["text/plain"],"groups":["mime_types"],"name":"MIME_GOOD"},{"score":-0.200000,"group":"headers","options":["mailman"],"groups":["headers"],"name":"MAILLIST"},{"score":1,"group":"headers","groups":["headers"],"name":"SUBJECT_ENDS_QUESTION"},{"score":-0.200000,"group":"policies","options":["+ip4:188.68.A.B"],"groups":["policies","spf"],"name":"R_SPF_ALLOW"},{"score":-1,"group":"policies","options":["list.sys4.de:s=2023032101:i=1"],"groups":["policies","arc"],"name":"ARC_ALLOW"},{"score":0,"group":"ungrouped","options":["asn:19xxxx, ipnet:188.68.A.B/20, country:XY"],"groups":[],"name":"ASN"},{"score":0.100000,"group":"headers","groups":["headers"],"name":"RCVD_NO_TLS_LAST"},{"score":0,"group":"headers","groups":["headers","composite"],"name":"FORGED_RECIPIENTS_MAILLIST"},{"score":0,"group":"policies","options":["baz.org:+","bar.org:-"],"groups":["policies","dkim"],"name":"DKIM_TRACE"},{"score":0,"group":"headers","groups":["headers"],"name":"REPLYTO_DOM_NEQ_FROM_DOM"},{"score":0,"group":"policies","options":["bar.org:s=dktest"],"groups":["policies","dkim"],"name":"R_DKIM_REJECT"},{"score":-2.407652,"group":"statistics","options":["97.28%"],"groups":["statistics"],"name":"BAYES_HAM"},{"score":0,"group":"headers","groups":["headers"],"name":"TO_DN_ALL"},{"score":0,"group":"composite","groups":["composite"],"name":"DKIM_MIXED"},{"score":-0.200000,"group":"policies","options":["baz.org:s=20230217-rsa"],"groups":["policies","dkim"],"name":"R_DKIM_ALLOW"},{"score":0,"group":"headers","options":["3"],"groups":["headers"],"name":"RCVD_COUNT_THREE"},{"score":-0.600000,"group":"rbl","options":["188.68.A.B:from","188.68.A.B:received","168.100.A.B:received"],"groups":["rbl","dnswl"],"name":"RCVD_IN_DNSWL_MED"},{"score":-0.100000,"group":"rbl","options":["188.68.A.B:from"],"groups":["rbl","mailspike"],"name":"RWL_MAILSPIKE_GOOD"},{"score":0,"group":"policies","options":["baz.org"],"groups":["policies","dmarc"],"name":"DMARC_NA"},{"score":0,"group":"headers","options":["1"],"groups":["headers"],"name":"RCPT_COUNT_ONE"},{"score":0,"group":"mime_types","options":["0:+"],"groups":["mime_types"],"name":"MIME_TRACE"},{"score":0,"group":"headers","groups":["headers","composite"],"name":"FORGED_SENDER_MAILLIST"},{"score":0,"group":"headers","groups":["headers"],"name":"TO_EQ_FROM"},{"score":0,"group":"headers","options":["foo@bar.org"],"groups":["headers"],"name":"HAS_REPLYTO"}]}

Currently I'm extracting the JSON with a props.conf and a transforms.conf:

props.conf
[rspamd]
KV_MODE = json
TRANSFORMS-json_extract_rspamd = json_extract_rspamd

transforms.conf
[json_extract_rspamd]
SOURCE_KEY = _raw
DEST_KEY = _raw
LOOKAHEAD = 10000
#REGEX = ^([^{]+)({.+})$
REGEX = ^(\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d) (#\d+)\(([^)]+)\) ([^;]+); lua[^{]+{(.+})$
FORMAT = {"date":"$1","ida":"$2","process":"$3","idb":"$4",$5
CLONE_SOURCETYPE = _json

I end up with this in Splunk: (screenshot of the extracted event)

From here, I have two problems.

1st problem: contrary to native JSON (from my Amavis logs, for example), Splunk does not extract fields or compute basic stats about them unless I explicitly extract them. That's quite a pain. Is there a way or a config setting to instruct Splunk to automatically extract every field?

2nd problem: this JSON is awkwardly structured. Every object in "symbols[]" looks like this (screenshot), which makes it almost unusable because it prevents me from linking the name of a symbol to its score and to its options. Is there a parsing option or function I could use to reliably transform this into something I can work with? A good result would be turning

{ group: abusix groups: [ abusix rbl ] name: RWL_AMI_LASTHOP options: [ A.B.C.D:from ] score: -1 }

into

RWL_AMI_LASTHOP: [ group: abusix groups: [ abusix rbl ] name: RWL_AMI_LASTHOP options: [ A.B.C.D:from ] score: -1 ]

I'm open to suggestions; I've been working for years with the great JSON logs of Amavis (perfect parsing and usability). This problem is new to me.
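One search-time sketch for the second problem (assuming the cloned _json events carry the reconstructed JSON in _raw, as configured above) is to expand symbols{} so that each symbol's name, score, and options land on the same row:

sourcetype=_json
| spath output=message_id path=message_id
| spath output=symbol path=symbols{}
| mvexpand symbol
| spath input=symbol
| table message_id, name, score, group, options{}

This produces one row per symbol per message rather than parallel multivalue fields.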
Hi All, I want to create an SPL query that first returns data by matching the destination IP address from the Palo Alto logs. Then, based on the destination IP, it should resolve the actual destination hostname from the Symantec logs and the Windows event logs in separate fields. I was able to match the destination IP (dest_ip) from the Palo Alto logs with the Symantec logs and return the hostname (where available). However, I am struggling to do the same by joining the Windows logs to return the values, which should be equal to the hostname in the Symantec logs. Can someone kindly assist me in fixing this code to retrieve the expected results?

index=*-palo threat="SMB: User Password Brute Force Attempt(40004)" src=* dest_port=445
| eval dest_ip=tostring(dest)
| join type=left dest_ip
    [ search index=*-sep device_ip=*
      | eval dest_ip=tostring(device_ip)
      | stats count by dest_ip user_name device_name ]
| eval dest_ip=tostring(dest)
| join type=left dest_ip
    [ search index="*wineventlog" src_ip=*
      | eval dest_ip=tostring(src_ip)
      | eval username=tostring(user)
      | stats count by dest_ip username ComputerName ]
| table future_use3 src_ip dest_ip dest_port user device_name user_name rule threat repeat_count action ComputerName username
| sort src_ip
| rename future_use3 AS "Date/Time" src_ip AS "Source IP" dest_ip AS "Destination IP" user AS "Palo Detected User" user_name AS "Symantec Detected User @ Destination" device_name AS "Symantec Destination Node" rule AS "Firewall Rule" threat as "Threat Detected" action as "Action" repeat_count AS "Repeated Times"
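For reference, a join-free sketch of the same correlation (field names are copied from the query above; whether they actually line up across the three indexes would need to be verified):

(index=*-palo threat="SMB: User Password Brute Force Attempt(40004)" dest_port=445)
    OR (index=*-sep device_ip=*)
    OR (index="*wineventlog" src_ip=*)
| eval dest_ip=coalesce(dest, device_ip, src_ip)
| stats values(device_name) as "Symantec Destination Node",
        values(user_name) as "Symantec Detected User @ Destination",
        values(ComputerName) as "Windows Destination Node",
        values(user) as "Detected User"
        by dest_ip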
Hello, I'm currently trying to convert some mixed-text events into JSON. The log file is made up of some pure-text log lines and some other lines that start with plain text and end with some JSON. I have created a transforms.conf rule to extract the JSON and to clone the event into the _json sourcetype:

[json_extract_rspamd]
SOURCE_KEY = _raw
DEST_KEY = _raw
LOOKAHEAD = 10000
#REGEX = ^([^{]+)({.+})$
REGEX = ^(\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d) (#\d+)\(([^)]+)\) ([^;]+); lua[^{]+{(.+})$
FORMAT = {"date":"$1","ida":"$2","process":"$3","idb":"$4",$5
CLONE_SOURCETYPE = _json

This is working, but unfortunately it also clones every event from that log file. Is there a way to trigger the CLONE_SOURCETYPE only when the REGEX is matched?
Hi, I have log lines like this. I need to 1) group them by ID, and 2) filter to those transactions that have T[A].

#txn1
16:30:53:002 moduleA ID[123]
16:30:54:002 moduleA ID[123]
16:30:55:002 moduleB ID[123]T[A]
16:30:56:002 moduleC ID[123]
#txn2
16:30:57:002 moduleD ID[987]
16:30:58:002 moduleE ID[987]T[B]
16:30:59:002 moduleF ID[987]
16:30:60:002 moduleZ ID[987]

Any ideas? Thanks
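A sketch of one way to do this (the index is a placeholder; the rex is derived from the sample lines above):

index=your_index
| rex "ID\[(?<ID>\d+)\](?:T\[(?<T>\w+)\])?"
| eventstats values(T) as txn_flags by ID
| search txn_flags="A"
| stats list(_raw) as events by ID

eventstats copies any T value seen in a transaction onto every event with the same ID, so the filter keeps all events of transactions that contain T[A].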
Hi All, After restarting Splunk on my dev server I am getting the below error.  
I am very new to Splunk and am practicing using the botsv1 index. I need to use a wildcard to find all the passwords used against a destination IP. I know I need to use http_method=POST and search for the user passwords within the form_data field. I have been experimenting with SPL commands but with no success as of yet.
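A sketch of the general shape of that search (the sourcetype, destination IP, and form-field name are assumptions to be adjusted to the data):

index=botsv1 sourcetype=stream:http http_method=POST dest_ip="x.x.x.x" form_data=*passwd*
| rex field=form_data "passwd=(?<password>[^&]+)"
| stats count by password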
This would be a piece of cake for someone who uses Splunk. I am doing a search using the stats, count, and sort commands in the botsv1 index. I need to find the top ten URIs in ascending order. What is the SPL command?
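A sketch of the general shape of such a search (sourcetype and field name are assumptions): count by URI, keep the ten most frequent, then re-sort ascending for display:

index=botsv1 sourcetype=stream:http
| stats count by uri
| sort - count
| head 10
| sort count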