All Topics

Hi guys, I want to detect when more than 10 different ports on the same host are sniffed or scanned within a 15-minute window, and alert once that condition has been triggered 5 times in a row; if the same time period is then triggered for three consecutive days, a further alarm should fire. The current SPL:

index="xx"
| bin _time span=15m
| stats dc(dest_port) as dc_ports by _time src_ip dest_ip
| where dc_ports > 10
| streamstats count as consecutive_triggers by src_ip dest_ip reset_on_change=true
| where consecutive_triggers>=5

Next, I don't know how to query for the trigger occurring in the same period for three consecutive days.
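A minimal sketch of one way to extend this, assuming the base search above stays as-is: roll the qualifying 15-minute windows up to one row per calendar day, then count consecutive days with a second streamstats. The field names trigger_day, windows_triggered and consecutive_days are illustrative, not required names.

index="xx"
| bin _time span=15m
| stats dc(dest_port) as dc_ports by _time src_ip dest_ip
| where dc_ports > 10
| streamstats count as consecutive_triggers by src_ip dest_ip reset_on_change=true
| where consecutive_triggers>=5
| eval trigger_day=strftime(_time, "%Y-%m-%d")
| stats count as windows_triggered by trigger_day src_ip dest_ip
| sort 0 src_ip dest_ip trigger_day
| streamstats count as consecutive_days by src_ip dest_ip
| where consecutive_days>=3

Note that this counts triggering days per src_ip/dest_ip pair rather than strictly adjacent calendar days; to enforce true adjacency you could compute the day as an epoch value and compare each row to the previous one (for example with delta or streamstats range) before counting.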
Hello,

When I enable sslVerifyServerCert in server.conf under [sslConfig], I am seeing the following errors. Where does Splunk determine that there is an IP address mismatch? Is it trying to resolve the CN mentioned in the certificate?

09-11-2023 11:40:01.284 +0300 WARN X509Verify [1034989 TcpChannelThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"
09-11-2023 11:40:01.285 +0300 WARN X509Verify [1034990 TcpChannelThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"
09-11-2023 11:40:01.286 +0300 WARN X509Verify [1034986 TcpChannelThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"
09-11-2023 11:40:03.998 +0300 WARN X509Verify [1034777 DistHealthReporter] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"
09-11-2023 11:40:03.998 +0300 WARN X509Verify [1034786 DistributedPeerMonitorThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"
09-11-2023 11:40:04.005 +0300 WARN X509Verify [1034777 DistHealthReporter] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"

Cheers.
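For reference, a minimal sketch of the [sslConfig] settings that commonly interact with this check, with placeholder hostnames. When sslVerifyServerCert is enabled, splunkd validates the peer certificate's CN/subject alternative names against the name it used to connect, so a mismatch of this kind typically appears when peers are addressed by IP (or by a different DNS name) while the certificate only carries the CN shown in the warning.

[sslConfig]
sslVerifyServerCert = true
# Optional additional name checks; values below are placeholders for your own hostnames
sslCommonNameToCheck = searche.test.local
sslAltNameToCheck = searche.test.local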
Hi Splunk Folks, I am experiencing strange behavior after upgrading from 7.0.2 to 7.1.1. After the change we noticed that multiple Correlation Searches normally accessible from Content Management have somehow been "transformed" into Saved Searches and stopped being scheduled. Many of them, when expanded, show the error "No associated objects or datasets found." Additionally, the correlation searches which remained intact open the correlation search edit menu as expected, while those which "transformed" into Saved Searches take me to the Searches, reports, and alerts menu. In savedsearches.conf we can indeed see that all of the corrupted ones are missing "action.correlationsearch.enabled" and the other fields related to correlation searches.

Any ideas what might have happened? I have not found similar issues described on the web.
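In case it helps with scoping the damage, here is a rough sketch (not an official diagnostic) that lists scheduled saved searches which do not carry the correlation-search action flag; it will also list ordinary scheduled reports, so compare the output against your expected correlation search names and apps.

| rest /servicesNS/-/-/saved/searches count=0
| search is_scheduled=1 NOT action.correlationsearch.enabled=1
| table title eai:acl.app action.correlationsearch.enabled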
Hi, I am uploading a .csv file to a metrics index; however, Splunk is changing a "." to a "_" when I use the metrics. Any idea why this is happening? When sending data in via HEC this does not happen.

Any help would be great - Cheers
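Not an answer, but a quick sketch for comparing how the metric names actually landed in the index from each ingestion path (replace my_metrics_index with your metrics index):

| mcatalog values(metric_name) WHERE index=my_metrics_index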
Hi, My company has recently gone through a re-branding (name change). I was wondering how I should go about changing the Splunkbase company display name (and everything related to that). I tried contacting support but have been ghosted for the past month. Any way I can do this myself? Thanks!
Hello all, I am still relatively new to Splunk and SPL. To show the maximum uptime per day of four hosts in a bar chart, I wrote the following query:

sourcetype=datacollection VMBT02 OR slaznocaasevm01 OR VMMS01 OR slaznocaasmon01
| rex "Uptime: (?<hours>\d+) hours (?<minutes>\d+) minutes"
| rex "Uptime: (?<minutes>\d+) minutes (?<seconds>\d+) seconds"
| eval hours = coalesce(hours, 0)
| eval minutes = coalesce(minutes, 0)
| eval seconds = coalesce(seconds, 0)
| eval uptime_decimal = if(minutes > 0 AND seconds > 0, minutes/1000 * 10, hours*1 + minutes/100)
| eval formatted_uptime = round(uptime_decimal, 2)
| where extracted_hostname IN ("VMBT02", "slaznocaasevm01", "VMMS01", "slaznocaasmon01")
| stats max(formatted_uptime) as Uptime by extracted_Hostname

I extracted the field "extracted_hostname" via the GUI beforehand. There are no events here that do not match the regex, and the chart is displayed correctly afterwards. However, the field extraction does not work from the 1st to about the 10th of a month. Instead of the hostnames, other data fragments from the first line of an event are extracted, and I cannot see a pattern here either. Does anyone know what causes the incorrect extraction? Is my query perhaps incorrect? I hope someone can help me solve this problem. Thanks in advance. Many greetings
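One thing worth trying while investigating (a suggestion, not a confirmed fix): since the saved GUI extraction seems to misfire on events from the start of the month, possibly because a single-digit day shifts the text the extraction was sampled on, you could extract the hostname inline so the search does not depend on the saved extraction at all. This is only a sketch that assumes the hostname string appears literally in the raw event (which it should, since the base search matches on it):

sourcetype=datacollection VMBT02 OR slaznocaasevm01 OR VMMS01 OR slaznocaasmon01
| rex "(?<extracted_hostname>VMBT02|slaznocaasevm01|VMMS01|slaznocaasmon01)"

followed by the rest of the query unchanged (with the stats by clause using the same field name and capitalization).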
Hi All,

We are trying to use the CyberArk identity functionality in the DB Connect add-on. We created the safe in CyberArk and the firewall is open. When we try to create a new CyberArk identity from the UI (DB Connect -> Configuration -> Identities -> New CyberArk Identity) we are getting the error message "com.splunk.dbx.server.cyberark.exceptions.CredentialsRequestException: Error has occurred while getting password from CyberArk."

2023-09-11 18:55:24.739 +1000 [dw-28 - POST /api/identities] ERROR c.s.dbx.server.cyberark.runner.CyberArkAccessImpl - Unsuccessful response from CyberArk with error
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>403 - Forbidden: Access is denied.</title>
<style type="text/css">
<!--
body{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}
fieldset{padding:0 15px 10px 15px;}
h1{font-size:2.4em;margin:0;color:#FFF;}
h2{font-size:1.7em;margin:0;color:#CC0000;}
h3{font-size:1.2em;margin:10px 0 0 0;color:#000000;}
#header{width:96%;margin:0 0 0 0;padding:6px 2% 6px 2%;font-family:"trebuchet MS", Verdana, sans-serif;color:#FFF; background-color:#555555;}
#content{margin:0 0 0 2%;position:relative;}
.content-container{background:#FFF;width:96%;margin-top:8px;padding:10px;position:relative;}
-->
</style>
</head>
<body>
<div id="header"><h1>Server Error</h1></div>
<div id="content">
<div class="content-container"><fieldset>
<h2>403 - Forbidden: Access is denied.</h2>
<h3>You do not have permission to view this directory or page using the credentials that you supplied.</h3>
</fieldset></div>
</div>
</body>
</html>

When we try via a curl command we are able to fetch the password from CyberArk without any error. Can you please share how to add the certificate details in the CyberArk identity? The Splunk documentation just says to enter the public certificate text.
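For reference, the working curl test is roughly of this shape (host, AppID, Safe and Object are placeholders); the --cert/--key options only matter if the CCP endpoint requires client certificate authentication, which is a common cause of a 403 when the client certificate is not presented:

curl --cert /path/to/client-cert.pem --key /path/to/client-key.pem \
  "https://cyberark.example.com/AIMWebService/api/Accounts?AppID=SplunkDBX&Safe=MySafe&Object=MyAccount"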
Hello Splunkers!! As per the attached screenshot, I want to hide the values from Sep 2022 to July 2023, because those periods have null values. I want the graph to show only the periods that have values.
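One possible approach, sketched under the assumption that the panel is driven by a timechart (or similar) search and that the plotted field is called count (rename to your actual field and aggregation): filter out the empty buckets before the chart is rendered.

<your base search>
| timechart span=1mon count
| where isnotnull(count) AND count > 0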
Hello,

We have made a few updates to the Splunk_TA_AWS and have a few more ideas we want to implement, but we are finding it harder and harder to migrate and merge those changes when upgrades come out. Is there any way to contribute to the Splunk_TA_AWS add-on via pull requests? I found this GitHub page, but it is 8 years old, with no PRs or issues, and is way behind the current 7.x version: https://github.com/splunk-apps/splunk-aws-addon

Thanks
We have Splunk Add-on for Unix and Linux 8.2.0 installed and need to upgrade it to the latest version (8.10.0). Could someone advise whether I can upgrade directly to 8.10.0 or whether an incremental upgrade is required? Is there any feature in my existing set-up that will be affected by the upgrade? Also, what steps should I take while performing this so as not to lose any of my existing configs? Is there any documentation for this?
Hi Splunkers,

Need some help with a timechart query please.

index=linux host IN (a,b,c,d,e)
| timechart span=1week eval(avg(CPU) * avg(MEM)) BY host

This works well if there is at least one event per host, but I want to show a zero value when there are no events for a particular host. Is that possible? E.g. I have events only for hosts a, b and c, but I still want to show zero for hosts d and e.
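One possible approach, sketched under the assumption that the host list is fixed and that the BY host clause produces series columns literally named a through e (use your real host names): fillnull those columns explicitly after the timechart.

index=linux host IN (a,b,c,d,e)
| timechart span=1week eval(avg(CPU) * avg(MEM)) BY host
| fillnull value=0 a b c d e

This fills gaps with zero and should also create an all-zero column for a host that produced no events at all in the search period; if it does not in your version, appending one dummy event per host (append with makeresults) before the timechart is an alternative.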
I have asset management data that I need for weekly reports. I query the data like this:

index=a sourcetype=b
| stats values(ip_addr) as ip by hostname

Result:

hostname    ip
Host A      1) 10.0.0.0
            2) 10.10.10.1
            3) 10.0.0.2
Host B      1) 192.1.1.1
            2) 172.1.1.1

I want the result to not include the numbering in front of the IP addresses. Please assist with this. Thank you.
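Assuming the numbering ("1) ", "2) ", ...) is actually part of the ip_addr values in the raw data, here is a sketch that strips the prefix before aggregating (the regex is an assumption about the exact format and may need adjusting):

index=a sourcetype=b
| rex field=ip_addr mode=sed "s/^[0-9]+\) *//"
| stats values(ip_addr) as ip by hostname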
Hi, I am kinda new to Splunk and I'm having this trouble:

`A script exited abnormally with exit status: 1" input=".$SPLUNK_HOME/etc/apps/splunk-dashboard-studio/bin/save_image_and_icon_on_install.py"`

I have 1 master server (Cluster Master, SHC Deployer, License Master), 3 search heads (clustered), 3 indexers (clustered) and 1 heavy forwarder. I've run the command below, which I found on the web:

```
| rest /services/admin/inputstatus/ModularInputs:modular%20input%20commands splunk_server=local count=0
| append [| rest /services/admin/inputstatus/ExecProcessor:exec%20commands splunk_server=local count=0]
| fields inputs*
| transpose
| rex field=column "inputs(?<script>\S+)(?:\s\((?<stanza>[^\(]+)\))?\.(?<key>(exit status description)|(time closed)|(time opened))"
| eval value=coalesce('row 1', 'row 2'), stanza=coalesce(stanza, "default"), started=if(key=="time opened", value, started), stopped=if(key=="time closed", value, stopped)
| rex field=value "exited\s+with\s+code\s+(?<exit_status>\d+)"
| stats first(started) as started, first(stopped) as stopped, first(exit_status) as exit_status by script, stanza
| eval errmsg=case(exit_status=="0", null(), isnotnull(exit_status), "A script exited abnormally with exit status: "+exit_status, isnull(started) or isnotnull(stopped), "A script is in an unknown state"), ignore=if(`script_error_msg_ignore`, 1, 0)
```

and I got exit_status values of 1 and 114 as results. How do I get rid of these errors? Thank you in advance.
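Not a fix, but a sketch of where to look next, using the script name from the error message: search the internal logs for what the script printed when it failed, which usually explains the non-zero exit status.

```
index=_internal sourcetype=splunkd log_level=ERROR save_image_and_icon_on_install.py
| table _time host component _raw
```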
Hi all, I have CSV files (exports of Garmin R10 launch monitor session data from the Garmin Golf app) that contain two header lines: the first header line contains the field names and the second header line contains the unit of measurement (or is blank if not applicable). For example:

Date,Player,Club Name,Club Type,Club Speed,Attack Angle,...
,,,,[mph],[deg],...
09/10/23 10:00:45 AM,Johan,7 Iron,7 Iron,70.30108663634557,-7.360383987426758,...

Now, I would like to index the data in one of two ways:

1. Add the unit of measurement to the value, so that it would become "70.30108663634557mph" for the Club Speed field.
2. Add an additional column that contains the unit of measurement, e.g. a column "Club Speed UOM" with the value mph for every line indexed from the CSV file, and do this for every column that has a valid unit of measurement.

For me, option 2 would be the preferred option. A third option would be to skip the unit-of-measurement line altogether, but I would rather not use that.

I would appreciate any help that points me in the right direction to solve this challenge. Thanks in advance.
Hi, I didn't find detailed info on how to connect Universal Forwarders to the Monitoring Console. In our organization there is no deployment server, but we do want to monitor Splunk UF/HF instances with the Monitoring Console, so the info can be seen under MC > Forwarders > Forwarders: Deployment. What are the steps on the UF side to configure this? Thanks
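For what it's worth, the MC forwarder views are built from the forwarders' internal logs as they arrive at the indexers, so on the UF side the main requirement is simply that the forwarder sends its data (including _internal, which forwarders send by default) to your indexers. A minimal outputs.conf sketch with placeholder hostnames:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997

On the Monitoring Console side you then rebuild the forwarder asset table via the forwarder monitoring setup page so the new forwarders appear in Forwarders: Deployment.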
Hi,

We are wondering how to monitor SMBv1 access in a domain. We have already enabled logging of EventCode 3000 in the Windows event log. Now we want to know who uses SMBv1 to access each host. To start, we use this search:

index=windows EventCode=3000 source="WinEventLog:Microsoft-Windows-SMBServer/Audit"

But now we want to display in a table/stats, for each host, which computers/users accessed it. Could you help us please?
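A rough sketch of the kind of stats that could work here, assuming the audit event's message contains a "Client Address:" line (adjust the rex to your actual event text; note these SMB1 audit events record the client address rather than a user name):

index=windows EventCode=3000 source="WinEventLog:Microsoft-Windows-SMBServer/Audit"
| rex "Client Address:\s*(?<client_address>\S+)"
| stats count as smb1_accesses values(client_address) as clients by host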
I am learning Splunk for the first time in my course. I had a task of setting up 4 VMs through VMware Workstation: 1 controller (CentOS GUI) and 3 agents (CentOS CLI). I went through the configuration of the VMs and they all ping each other fine. I installed Splunk on the 4 VMs over SSH using MobaXterm. After opening port 9997 as a receiving port on the controller and saving it, I configured each agent to forward to the controller's IP and port. At the last part of my lab I had to run this search:

index="main" host=* | table host | dedup host

This had no results. I was told that if nothing popped up I should troubleshoot by rebooting my VMs and my host system, but that didn't fix it. I would love some insights.
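One thing that can help narrow this down (a suggestion, not part of the original lab): on the controller, check whether the agents are even connecting to the receiving port by looking at the controller's internal metrics. If the agents show up here, the forwarding connection works and the problem is more likely missing inputs on the agents rather than the network.

index=_internal source=*metrics.log* group=tcpin_connections
| stats count by hostname sourceIp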
I have installed the Splunk forwarder on a Windows 10 VM and have Splunk installed on a Debian VM. I have restarted the forwarder on the Win10 VM, but when I log into Splunk Enterprise on the Debian VM and go to Search & Reporting > Data Summary, there is no listing of the Win10 VM in either the hosts or the sources list. Does anyone have any idea what I could be doing wrong, or any suggestions of things I could try?
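A quick check worth trying (a suggestion, relying on the fact that forwarders send their own internal logs by default): search the _internal index on the Debian instance for the Windows host. If the host appears there, the connection works and the likely issue is that no data inputs (e.g. Windows event logs) are configured on the forwarder; if it does not appear, look at connectivity, the receiving port on the Debian instance, or outputs.conf on the forwarder.

index=_internal host=<your-win10-hostname>
| stats count by host sourcetype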
Hi, I am trying to run a search with tokens setting various search items. What I need is to create a search from an input lookup and have one field referenced multiple times for different fields.

My search is:

| inputlookup errorLogs
| where RunStartTimeStamp == "2023-01-26-15.47.24.000000"
| where HostName == "myhost.com"
| where JobName == "runJob1"
| where InvocationId == "daily"
| fields RunID, ControllingRunID
| uniq
| format "(" "(" "OR" ")" "||" ")"

This gives:

( ( ControllingRunID="12345" OR RunID="67890" ) )

What I would like is:

( ( ControllingRunID="12345" OR RunID="67890" OR RunID="12345" OR ControllingRunID="67890" ) )

There could be many ID pairs of run/controlling IDs, and I want to search on any combination if possible.
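A sketch of one way to get every ID matched against both fields, assuming the lookup fields shown above: collapse both ID columns into a single multivalue field, expand it so each ID becomes its own row carrying the same value in RunID and ControllingRunID, then let format build the OR clause. The result, ( ( RunID="12345" OR ControllingRunID="12345" ) OR ( RunID="67890" OR ControllingRunID="67890" ) ), is logically equivalent to the expression you want.

| inputlookup errorLogs
| where RunStartTimeStamp == "2023-01-26-15.47.24.000000"
| where HostName == "myhost.com"
| where JobName == "runJob1"
| where InvocationId == "daily"
| eval id=mvappend(RunID, ControllingRunID)
| mvexpand id
| eval RunID=id, ControllingRunID=id
| fields RunID, ControllingRunID
| dedup RunID ControllingRunID
| format "(" "(" "OR" ")" "OR" ")"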
I want to use the new signature="test" in the search below, but I don't want to add this new signature to the existing lookup.

| tstats summariesonly=true values(IDS_Attacks.action) as action from datamodel=Intrusion_Detection.IDS_Attacks by _time, IDS_Attacks.src, IDS_Attacks.dest, IDS_Attacks.signature
| `drop_dm_object_name(IDS_Attacks)`
| lookup rq_subnet_zones Network as dest OUTPUTNEW Name, Location
| lookup rq_subnet_zones Network as src OUTPUTNEW Name, Location
| search NOT Name IN ("*Guest*","*Mobile*","*byod*","*visitors*","*phone*")
| lookup rq_emergency_signature_iocs_v01 ioc as signature OUTPUTNEW last_seen
| where isnotnull(last_seen)
| dedup src
| head 51
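A sketch of one way to do this, assuming the goal is to keep events whose signature either matches the lookup or equals the literal "test": relax the where clause instead of touching the lookup, leaving the first part of the search unchanged.

...
| lookup rq_emergency_signature_iocs_v01 ioc as signature OUTPUTNEW last_seen
| where isnotnull(last_seen) OR signature="test"
| dedup src
| head 51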