All Posts

I am very new to Splunk, but I just encountered the explanation for this in a course. When no dataset is specified in the FROM clause, Splunk assumes the first root dataset is addressed. When you want to address any root dataset other than the first one, you must specify it explicitly. Therefore, it is best practice not to rely on Splunk assuming the first root dataset, and to specify it in every use, even if Splunk allows you to save that little bit of typing:

| tstats summariesonly=t count FROM datamodel=model_name.dataset_1 where nodename=dataset_1 by dataset_1.FieldName
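To make the contrast concrete, here is a minimal sketch of the two forms (model_name, dataset_1, and FieldName are the placeholder names from the search above; dataset_2 is a hypothetical second root dataset). The first line relies on the implicit first root dataset; the second must name the other root dataset explicitly:

| tstats count FROM datamodel=model_name

| tstats count FROM datamodel=model_name.dataset_2 where nodename=dataset_2 by dataset_2.FieldName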
Hi @gcusello   Yes, I want an alert for onlineStatus="OFFLINE" and onlineStatus="ONLINE" for the same device.
Hi @parthiban, please confirm: you want an alert if onlineStatus="recovery" or if, for a defined period, you don't receive logs from a device, is that correct? In this case, you can use my second search, creating a list of devices to monitor in a lookup. Ciao. Giuseppe
We have the same issue. Any news on how to fix this?
Hi @gcusello  In the log, we receive the payload model below. In the 'entities' section, I've only shown one device status, but in reality there are 11 device statuses in a single log message. I want to create an alert: if a device goes offline, it should trigger one alert, and when it comes back online, it should trigger a clear-alarm alert. I say only one alert because we receive logs every 2 minutes from AWS, and I want to avoid multiple alerts for the same device going offline and online. I hope my requirement is clear.

response_details: {
  response_payload: {
    entities: {
      id: "YYYYYYY",
      name: "ABC",
      onlineStatus: "ONLINE",
      serialNumber: "XXXXXXX",
    },
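One common pattern for this requirement is to alert only when a device's status changes, which naturally yields one alert per offline event and one per recovery. A minimal hedged sketch, assuming the payload above is the event body, that entities is an array holding the 11 device objects, and placeholder index/sourcetype names:

index=your_index sourcetype=your_sourcetype
| spath path=response_details.response_payload.entities{} output=entity
| mvexpand entity
| spath input=entity
| sort 0 serialNumber _time
| streamstats current=f last(onlineStatus) AS prev_status BY serialNumber
| where isnotnull(prev_status) AND onlineStatus!=prev_status
| table _time serialNumber name prev_status onlineStatus

Each result row is a transition (e.g. ONLINE to OFFLINE or back), so an alert on this search fires once per state change rather than every 2 minutes.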
Hi @jhooper33 , don't use the search command: put all the search terms in the main search, so you'll have a faster search:

index=pan_logs
    [ inputlookup url_intel.csv | fields ioc | rename ioc AS dest_url ]
    NOT [ | inputlookup whitelist.csv WHERE category=website | fields ignoreitem | rename ignoreitem AS query ]
    NOT ("drop" OR "denied" OR "deny" OR "reset" OR "block")
| eval Sensor_Name="Customer", Signature="URL Intel Hits", user=if(isnull(user),"-",user), src_ip=if(isnull(src_ip),"-",src_ip), dest_ip=if(isnull(dest_ip),"-",dest_ip), event_criticality="Medium"
| rename _raw AS Raw_Event
| table _time event_criticality Sensor_Name Signature user src_ip dest_ip Raw_Event

Ciao. Giuseppe
@gcusello  Apologies for the late response; I got the OK to send the search today. The url_intel.csv is what has 66,317 lines. I just ran this alert and it didn't give the regex error, so it is intermittent whether it gives an error at all.

index=pan_logs
    [ inputlookup url_intel.csv | fields ioc | rename ioc AS dest_url ]
| search NOT [ | inputlookup whitelist.csv | search category=website | fields ignoreitem | rename ignoreitem AS query ]
| search NOT ("drop" OR "denied" OR "deny" OR "reset" OR "block")
| eval Sensor_Name="Customer", Signature="URL Intel Hits", user=if(isnull(user),"-",user), src_ip=if(isnull(src_ip),"-",src_ip), dest_ip=if(isnull(dest_ip),"-",dest_ip), event_criticality="Medium"
| rename _raw AS Raw_Event
| table _time,event_criticality,Sensor_Name,Signature,user,src_ip,dest_ip,Raw_Event
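With 66,317 rows, the inputlookup subsearch is expanded into one enormous OR expression in the generated search, which is a plausible source of intermittent regex/size errors. A hedged alternative sketch that avoids the expansion entirely by matching events against the lookup file directly (it assumes url_intel.csv is visible as a lookup file in the app, and that dest_url is already extracted on the events, as in the search above):

index=pan_logs
| lookup url_intel.csv ioc AS dest_url OUTPUT ioc AS matched_ioc
| where isnotnull(matched_ioc)

The lookup command joins per event instead of rewriting the base search, so the lookup size no longer affects the length of the generated search string; the tradeoff is that the base search is no longer pre-filtered to the IOC list.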
Hi @nehamvinchankar, this seems to be a json log, so you could use INDEXED_EXTRACTIONS=json in the sourcetype or the spath command. In addition, if you want to use a regex, you can use this:

| rex "(?ms)\"API_NAME\": \"(?<API_NAME>[^\"]+)\",\n\"DEP_DATE\": \"(?<DEP_DATE>[^\"]+)\""

that you can test at https://regex101.com/r/cPQ2By/1 Ciao. Giuseppe
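Since the goal is one row per array entry, a hedged spath-based sketch may be closer to the requirement than a single rex (index name is a placeholder; the rex that isolates the JSON array assumes the literal "NEW_API : " prefix shown in the sample):

index=your_index "Pay for NEW_API"
| rex "(?ms)NEW_API : (?<json_payload>\[.*\])"
| spath input=json_payload path={} output=entry
| mvexpand entry
| spath input=entry
| table DEP_DATE API_NAME

mvexpand turns the multivalue array into separate results, so each API_NAME/DEP_DATE pair lands on its own row.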
Hi experts, I want to extract the below fields into separate events to work on further.

INFO 2023-12-11 17:06:01,726 [[Runtime].Pay for NEW_API : [ { "API_NAME": "wurfbdjd", "DEP_DATE": "2023-12-08T00:00:00" }, { "API_NAME": "mcbhsa", "DEP_DATE": "2023-12-02T00:00:00" }, { "API_NAME": "owbaha", "DEP_DATE": "2023-12-02T00:00:00" }, { "API_NAME": "pdjna7aha", "DEP_DATE": "2023-11-20T00:00:00" } ]

I want to extract DEP_DATE and API_NAME as a separate row each, like:

DEP_DATE             API_NAME
2023-12-08T00:00:00  wurfbdjd
2023-12-02T00:00:00  mcbhsa
Hi @abhi04 , I usually use the solution with SEDCMD hinted at by @richgalloway . Anyway, the solution with props and transforms should also work (note that with DEST_KEY = _raw, FORMAT rewrites the whole event, so you need capture groups to keep the rest of the text):

props.conf

[source::abc]
TRANSFORMS-anonymize = abc-anonymizer

transforms.conf

[abc-anonymizer]
DEST_KEY = _raw
REGEX = (.*)Verification Code:\d+(.*)
FORMAT = $1Verification Code:######$2

Ciao. Giuseppe
Hi @aaronbarry73, In general, I don't like to run an input on a Search Head; I prefer to use a dedicated Heavy Forwarder. Anyway, open a case with Splunk Support to better understand this behavior. Ciao. Giuseppe
Hi @parthiban , let me understand: you have logs from this remote device. In these logs there's a status field, which can contain the "recovery" value; then you want to monitor whether the device is up and running and sending logs, is that correct? If this is your requirement, please try something like this:

index=your_index device=your_device
| stats count(eval(status="recovery")) AS recovery_count, count AS total
| eval status=case(total=0, "down", recovery_count>0, "recovery", true(), "up")
| where status="recovery" OR status="down"
| table status

If you have more devices to monitor, you can put them in a lookup (called e.g. perimeter.csv), containing at least one column (device), and run something like this:

index=your_index
| stats count(eval(status="recovery")) AS recovery_count, count AS total BY device
| append [ | inputlookup perimeter.csv | eval recovery_count=0, total=0 | fields device recovery_count total ]
| stats sum(recovery_count) AS recovery_count, sum(total) AS total BY device
| eval status=case(total=0, "down", recovery_count>0, "recovery", true(), "up")
| where status="recovery" OR status="down"
| table device status

Ciao. Giuseppe
Hi, can you post sample code of your solution? I can't figure it out.
Hi @nithys , you have to put the NOT operator before the field, not before IN:

index=Index1 source IN ("source 1","source 2","source 3","source 4") NOT source IN ("source 4","source 5","source 3","source 6")

Anyway, only out of curiosity: the source field should be unique per event, so the first (inclusive) condition should be sufficient and the second (exclusive) condition shouldn't be mandatory. Ciao. Giuseppe
I've installed Python for Scientific Computing (Windows 64-bit) because it's a requirement for MLTK. While I'm setting up a Predict Numeric Fields Experiment, there is an error in the fit command. The error message is:

Error in 'fit' command: (ImportError) DLL load failed while importing _arpack: The specified procedure could not be found.

What should I do to solve this problem?
Thank you so much for following through with this issue. The installation completed successfully!
Hi All, I need some help in searching. I have 1 index but it has multiple sources:

Index = Index1
Source = source 1
Source = source 2
Source = source 3
Source = source 4
Source = source 5
Source = source 6
Source = source 7

Now I have a requirement to create an alert search with only the first 4 sources and exclude the remaining three sources 5, 6, 7. I tried using the below query:

Index = Index1 source IN ("source 1","source 2","source 3","source 4")

When I tried to exclude the other sources, I got an error. Can you help with this? These are my attempts:

Index = Index1 source IN ("source 1","source 2","source 3","source 4") source  NOT IN ("source 4","source 5","source 3","source 6")

or

Index = Index1 source ! IN ("source 4","source 5","source 6") source IN ("source 1","source 2","source 3","source 4") source ! IN ("source 4","source 5","source 3","source 6")
Hi everyone, We have an on-premise edge device in a remote location, and it is added to the cloud. I would like to monitor and set an alert for both device offline and recovery statuses. While I can set an alert for the offline status, I'm a bit confused about including the recovery status. Can you please assist me in configuring the alert for both scenarios?
Why use both props AND transforms when you can do it with just props?

[source::abc]
SEDCMD-anonymizer = s/Verification Code: \d+/Verification Code: ######/g
Hi all, I built a dedicated Search Head Cluster with 3 members and a deployer to load and test how DB Connect works in a shcluster.  Splunk Enterprise 9.1.2 and DB Connect 3.15.1.  The configs replicate fine across the members and I am running several inputs.  It appears that all of the inputs so far are running on the captain only.  I am wondering if this is normal behavior, and if the captain will start distributing input jobs to other members once it is maxed out? I am running this search to see the input jobs:

index=_internal sourcetype=dbx_job_metrics connection=* host IN (abclx1001,abclx1002,abclx1003)
| table _time host connection input_name db_read_time status start_time end_time duration read_count write_count error_count
| sort - _time

All inputs are successful, and the host field is always the same - it is the captain. The other members give me messages like this:

2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 74 : Run DB Input name=test_db_input took 0.045278310775756836 s
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 66 : Input was run on other node status=303 content=b'Ignoring input request as other node is the captain'
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 51 : Run DB Input name=test_db_input
127.0.0.1 - - [11/Dec/2023:23:40:00 +0000] "POST /api/inputs/test_db_input/run HTTP/1.1" 303 51 "-" "python-requests/2.25.0" 41
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 45 : action=send_run_input_request
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 74 : Run DB Input name=test_db_input took 0.04212641716003418 s
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 66 : Input was run on other node status=303 content=b'Ignoring input request as other node is the captain'
127.0.0.1 - - [11/Dec/2023:23:40:00 +0000] "POST /api/inputs/test_db_input/run HTTP/1.1" 303 51 "-" "python-requests/2.25.0" 38
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 51 : Run DB Input name=test_db_input
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 45 : action=send_run_input_request

Thoughts?  Is the shc supposed to distribute these inputs the way it would distribute scheduled searches?
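For cross-checking which member the inputs land on, a hedged sketch using the standard SHC status REST endpoint to confirm the current captain (run from any member; the exact output field names may vary by Splunk version):

| rest /services/shcluster/status splunk_server=local
| fields captain.label captain.id

Comparing the captain's label against the host values in the dbx_job_metrics search above should confirm whether the jobs really are pinned to the captain.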