All Posts


@gcusello  Apologies for the late response, I got the OK to send the search today. The url_intel.csv is what has 66,317 lines. I just ran this alert and it didn't give the regex error, so the error is intermittent; it doesn't appear on every run.

index=pan_logs [ inputlookup url_intel.csv | fields ioc | rename ioc AS dest_url ]
| search NOT [| inputlookup whitelist.csv | search category=website | fields ignoreitem | rename ignoreitem AS query ]
| search NOT ("drop" OR "denied" OR "deny" OR "reset" OR "block")
| eval Sensor_Name="Customer", Signature="URL Intel Hits", user=if(isnull(user),"-",user), src_ip=if(isnull(src_ip),"-",src_ip), dest_ip=if(isnull(dest_ip),"-",dest_ip), event_criticality="Medium"
| rename _raw AS Raw_Event
| table _time, event_criticality, Sensor_Name, Signature, user, src_ip, dest_ip, Raw_Event
Hi @nehamvinchankar, this seems to be a JSON log, so you could use INDEXED_EXTRACTIONS=json in the sourcetype definition, or the spath command. In addition, if you want to use a regex, you can use this:

| rex "(?ms)\"API_NAME\": \"(?<API_NAME>[^\"]+)\",\n\"DEP_DATE\": \"(?<DEP_DATE>[^\"]+)\""

that you can test at https://regex101.com/r/cPQ2By/1 Ciao. Giuseppe
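If you want to sanity-check that pattern outside Splunk, here is a minimal Python sketch. Note two assumptions: Python spells named groups `(?P<name>...)` where rex uses `(?<name>...)`, and the sample string below is rewritten with the literal newline the pattern expects between the two keys.

```python
import re

# Sample values from the question, laid out with the newline the rex expects
raw = '"API_NAME": "wurfbdjd",\n"DEP_DATE": "2023-12-08T00:00:00"\n"API_NAME": "mcbhsa",\n"DEP_DATE": "2023-12-02T00:00:00"'

# Same pattern as the rex above, with Python-style (?P<name>...) named groups
pattern = re.compile(r'"API_NAME": "(?P<API_NAME>[^"]+)",\n"DEP_DATE": "(?P<DEP_DATE>[^"]+)"')

# Each match yields one (API_NAME, DEP_DATE) pair, i.e. one row per API
pairs = [(m.group("API_NAME"), m.group("DEP_DATE")) for m in pattern.finditer(raw)]
print(pairs)
```

Each match corresponds to one extracted row, which is what `rex max_match=0` plus `mvexpand` would give you in SPL.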
Hi experts, I want to extract the below fields into separate events to work on further.

INFO 2023-12-11 17:06:01,726 [[Runtime].Pay for NEW_API : [ { "API_NAME": "wurfbdjd", "DEP_DATE": "2023-12-08T00:00:00" }, { "API_NAME": "mcbhsa", "DEP_DATE": "2023-12-02T00:00:00" }, { "API_NAME": "owbaha", "DEP_DATE": "2023-12-02T00:00:00" }, { "API_NAME": "pdjna7aha", "DEP_DATE": "2023-11-20T00:00:00" } ]

I want to extract DEP_DATE and API_NAME into separate rows:

DEP_DATE API_NAME
2023-12-08T00:00:00 wurfbdjd
2023-12-02T00:00:00 mcbhsa
Hi @abhi04 , I usually use the solution with SEDCMD hinted by @richgalloway . Anyway, the solution with props and transforms should also work (with DEST_KEY = _raw, FORMAT becomes the new raw event, so you need capture groups to keep the surrounding text):

props.conf
[source::abc]
TRANSFORMS-anonymize = abc-anonymizer

transforms.conf
[abc-anonymizer]
DEST_KEY = _raw
REGEX = (.*Verification Code: )\d+(.*)
FORMAT = $1######$2

Ciao. Giuseppe
Hi @aaronbarry73, In general, I don't like to run inputs on a Search Head; I prefer to use a dedicated Heavy Forwarder. Anyway, open a case with Splunk Support to better understand this behavior. Ciao. Giuseppe
Hi @parthiban , let me understand: you have logs from this remote device; in these logs there's a status field, which can take the "recovery" value; and you want to monitor whether the device is up and running and sending logs. Is that correct? If this is your requirement, please try something like this:

index=your_index device=your_device
| stats count BY status
| append [ | makeresults | eval device=your_device, count=0 | fields device count ]
| stats sum(count) AS total BY status
| eval status=if(total=0,"down",status)
| search status="recovery" OR status="down"
| table status

If you have more devices to monitor, you can put them in a lookup (called e.g. perimeter.csv), containing at least one column (device), and run something like this:

index=your_index
| stats count BY device status
| append [ | inputlookup perimeter.csv | eval count=0 | fields device count ]
| stats sum(count) AS total BY device status
| eval status=if(total=0,"down",status)
| search status="recovery" OR status="down"
| table device status

Ciao. Giuseppe
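The core trick above is merging real event counts with a zero-count baseline from the lookup, so devices that sent nothing still produce a row. A minimal Python sketch of that logic, with hypothetical device names and counts:

```python
# Sketch of the "append a zero-count row per device" trick:
# observed counts are merged with a zero baseline from the
# perimeter list, so devices with no events at all surface as "down".
perimeter = ["edge01", "edge02", "edge03"]   # lookup contents (hypothetical)
event_counts = {"edge01": 42, "edge03": 7}   # counts observed in the index (hypothetical)

status = {}
for device in perimeter:
    total = event_counts.get(device, 0)      # plays the role of the appended zero row
    status[device] = "down" if total == 0 else "up"

print(status)   # edge02 has no events, so it is reported down
```

The `append [ | inputlookup ... | eval count=0 ]` plus `stats sum(count)` in the SPL achieves the same merge.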
Hi, can you post a code sample of your solution? I can't figure it out.
Hi @nithys , you have to put the NOT operator before the field, not before IN:

index = Index1 source IN ("source 1","source 2","source 3","source 4") NOT source IN ("source 4","source 5","source 3","source 6")

Anyway, just out of curiosity: the source field value should be unique per event, so the first (inclusive) condition should be sufficient and the second (exclusive) condition shouldn't be necessary. Ciao. Giuseppe
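The combined include/exclude condition behaves like a set difference; a quick Python sketch using the source names from the question makes the resulting filter explicit:

```python
# All sources in the index, per the question
sources = ["source 1", "source 2", "source 3", "source 4",
           "source 5", "source 6", "source 7"]

include = {"source 1", "source 2", "source 3", "source 4"}
exclude = {"source 4", "source 5", "source 3", "source 6"}

# source IN (...) NOT source IN (...)  ==  include minus exclude
kept = [s for s in sources if s in include and s not in exclude]
print(kept)   # only source 1 and source 2 survive both filters
```

This also illustrates Giuseppe's point: with these two lists, sources 3 and 4 are both included and excluded, so the exclusion silently wins.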
I've installed Python for Scientific Computing (Windows 64-bit) because it's a requirement for MLTK. While I'm setting up a Predict Numeric Fields Experiment, there is an error in the fit command. The error message is:

Error in 'fit' command: (ImportError) DLL load failed while importing _arpack: The specified procedure could not be found.

What should I do to solve this problem?
I've installed Python for Scientific Computing (window 64 bit) becauese it's a requirement for MLTK.   and while I'm setting Predict Numeric Fields Experiment, there is an error in fit command.   the error message is :   Error in 'fit' command: (ImportError) DLL load failed while importing _arpack: The specified procedure could not be found.     what should I do to solve this problem?
Thank you so much for following through with this issue. The installation completed successfully!
Hi All, I need some help with searching. I have 1 index but it has multiple sources:

Index = Index1
Source = source 1
Source = source 2
Source = source 3
Source = source 4
Source = source 5
Source = source 6
Source = source 7

Now I have a requirement to create an alert search with only the first 4 sources and exclude the remaining three (source 5, 6, 7). I tried the query below:

Index = Index1 source IN ("source 1","source 2","source 3","source 4")

When I tried to exclude the other sources, I got an error. Can you help with this? These are my attempts:

Index = Index1 source IN ("source 1","source 2","source 3","source 4") source NOT IN ("source 4","source 5","source 3","source 6")

or

Index = Index1 source ! IN ("source 4","source 5","source 6") source IN ("source 1","source 2","source 3","source 4") source ! IN ("source 4","source 5","source 3","source 6")
Hi everyone, We have an on-premises edge device in a remote location, and it is added to the cloud. I would like to monitor and set an alert for both device offline and recovery statuses. While I can set an alert for the offline status, I'm a bit confused about including the recovery status. Can you please assist me in configuring the alert for both scenarios?
Why use both props AND transforms when you can do it with just props?

[source::abc]
SEDCMD-anonymizer = s/Verification Code: \d+/Verification Code: ######/g

(Note: the match must not consume the closing quote after the digits, or the masked event would no longer be valid JSON.)
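SEDCMD uses sed-style s/// substitution; the equivalent replacement can be sanity-checked with Python's `re.sub` on the sample event from the question (treating the Python regex dialect as close enough to PCRE for this simple pattern):

```python
import re

# Sample event from the question
raw = '{"body": " Verification Code: 123456",'

# Equivalent of: SEDCMD-anonymizer = s/Verification Code: \d+/Verification Code: ######/g
masked = re.sub(r"Verification Code: \d+", "Verification Code: ######", raw)
print(masked)   # the digits are replaced, the surrounding JSON is untouched
```

Because only the digits are consumed by `\d+`, the closing quote and comma survive and the event stays well-formed.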
Hi all, I built a dedicated Search Head Cluster with 3 members and a deployer to load-test how DB Connect works in a shcluster. Splunk Enterprise 9.1.2 and DB Connect 3.15.1. The configs replicate fine across the members and I am running several inputs. It appears that all of the inputs so far are running on the captain only. I am wondering if this is normal behavior, and if the captain will start distributing input jobs to other members once it is maxed out? I am running this search to see the input jobs:

index=_internal sourcetype=dbx_job_metrics connection=* host IN (abclx1001,abclx1002,abclx1003)
| table _time host connection input_name db_read_time status start_time end_time duration read_count write_count error_count
| sort - _time

All inputs are successful, and the host field is always the same - it is the captain. The other members give me messages like this:

2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 74 : Run DB Input name=test_db_input took 0.045278310775756836 s
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 66 : Input was run on other node status=303 content=b'Ignoring input request as other node is the captain'
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 51 : Run DB Input name=test_db_input
127.0.0.1 - - [11/Dec/2023:23:40:00 +0000] "POST /api/inputs/test_db_input/run HTTP/1.1" 303 51 "-" "python-requests/2.25.0" 41
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 45 : action=send_run_input_request
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 74 : Run DB Input name=test_db_input took 0.04212641716003418 s
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 66 : Input was run on other node status=303 content=b'Ignoring input request as other node is the captain'
127.0.0.1 - - [11/Dec/2023:23:40:00 +0000] "POST /api/inputs/test_db_input/run HTTP/1.1" 303 51 "-" "python-requests/2.25.0" 38
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 51 : Run DB Input name=test_db_input
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 45 : action=send_run_input_request

Thoughts? Is the shc supposed to distribute these inputs the way it would distribute scheduled searches?
How can I mask the verification code using props/transforms?

{"body": " Verification Code: 123456",

I want to mask the code using props and transforms with the format below; I'm not sure how the search SPL regex differs from the regex in transforms.

props.conf
[source::abc]
TRANSFORMS-anonymize = abc-anonymizer

transforms.conf
[abc-anonymizer]
DEST_KEY = _raw
REGEX =
FORMAT = $1######$2
This seems to technically work; however, I am left with an unwanted "count" column at the end that I don't know how to remove. As an example of what I'm after, I've included the "Target Output" below.

Actual Output:

Target Output:
Hello, Are there any recommendations on installing or configuring the "Add-on for SharePoint API with AWS Integration"? Any help will be highly appreciated. Add-on for SharePoint API with AWS Integration | Splunkbase
Adding to @richgalloway 's answer - every cluster has exactly one active CM (even a multisite cluster). I can never recall the exact numbers, but it scales to a range of millions of buckets in your cluster (combined across all your indexes). The main question is why you are asking this particular thing. What issue are you trying to resolve?
1. Did you check splunk list monitor and splunk list inputstatus?
2. This might not be related, but the batch input does not have a crcSalt parameter (it makes no sense in the batch input context at all).
3. OK, so you have two separate file inputs covering the same path? That might be the problem.
That is indeed interesting, because supposedly keeping track of the timezone, but in the end sending the local time while explicitly labeling it as UTC, is not even a mistake. It's almost a crime. What ingenious piece of equipment is that, if you can share it with us?
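The off-by-an-offset effect of stamping local wall-clock time as UTC is easy to illustrate; here's a small sketch with a hypothetical UTC+2 device:

```python
from datetime import datetime, timezone, timedelta

# A device in UTC+2 takes its local wall-clock time ...
local_tz = timezone(timedelta(hours=2))              # hypothetical device timezone
local_now = datetime(2023, 12, 11, 17, 40, tzinfo=local_tz)

# ... but writes the timestamp claiming it is UTC:
mislabeled = local_now.replace(tzinfo=timezone.utc)

# The event now appears two hours later than its real UTC time,
# which is why such events land in the "future" at index time.
skew = mislabeled - local_now
print(skew)    # 2:00:00
```

The skew equals the device's UTC offset, so events can even appear to arrive before they happened for zones west of UTC.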