I have a log file that should be updated every minute, but sometimes the service hangs and the file stops updating. I need to send an alert if the file hasn't been updated for more than 5 minutes. I tried the following approaches, but the results don't seem accurate. Any ideas? Thanks. I tried:

index=abcapp source="abc.log" | sort _time | streamstats window=2 range(_time) as timediff | table timediff _time | eval alert=if(timediff>=5,1,0) | where alert=1

or:

index=abcapp source="abc.log" | sort _time | delta _time as timediff | eval alert=if(timediff>5,1,0) | where alert=1

or:

index=abcapp source="abc.log" earliest=-5m latest=now | stats count as num | eval alert=if(num=0,1,0) | where alert=1
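One likely issue with the first two attempts: range(_time) and delta produce differences in seconds, so timediff>=5 fires after a 5-second gap, not 5 minutes. A possible alternative (a sketch, reusing the index and source from the query above) is to check the age of the newest event instead of diffing consecutive events:

```spl
index=abcapp source="abc.log"
| stats latest(_time) as last_seen
| eval minutes_since_update = (now() - last_seen) / 60
| where minutes_since_update > 5
```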
Hello! I just set up Splunk Enterprise on-prem this morning and connected our Cisco Meraki firewall to Splunk using a UDP data input (syslog). I can see all the inbound/outbound traffic, and I'd like to create an alert so that when a network port scan (like NMAP) is conducted internally, I'm notified. I'm a little unsure what I need to put in the search query. For instance, I tried this in search: index=* sourcetype="syslog" src=172.30.0.0/21 dst=172.30.0.0/21 dport=22. It pulls in some data from my test NMAP scan, but it shows up as only one scan from my Kali Linux IP with the subnet gateway as the destination; it doesn't create an entry for every host that was scanned. Any suggestions on what I need to do?
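A port scan typically shows as one source touching many distinct ports or hosts in a short window, so counting distinct destinations per source can surface it. A minimal sketch (the src/dst/dport field names are assumed to be extracted from the Meraki syslog events, and the thresholds are purely illustrative):

```spl
index=* sourcetype="syslog" src=172.30.0.0/21
| bin _time span=5m
| stats dc(dport) as distinct_ports dc(dst) as distinct_hosts by src, _time
| where distinct_ports > 100 OR distinct_hosts > 50
```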
This might be impossible, but I thought I'd at least ask the question before giving up! I have created an add-on that uses Python scripts to pull in and index/checkpoint data, and I am already successfully pulling from several of these data input sources. The issue I am running into is that the next API I want to pull data from requires an identifier at the end (a UUID in this case). I don't know this UUID until I run a search on one of the aforementioned inputs. So my question is: is there a way to take a value (variable/token/etc.) and dynamically create another data input from it? Can you define a $var$ in your script that can be dynamically passed in, in order to index data from that endpoint?
Hi everyone, I have a question, and apologies in advance if this topic has already been brought up. We are currently using the Microsoft Office 365 Reporting Mail Add-on for Splunk to ingest message trace logs, but recently we've been running into consistent 401 Unauthorized errors. We've double- and triple-checked that the account used to query the API is not locked, and we get results when we call the URI manually: Invoke-RestMethod -Method GET -Uri "https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?" -Credential $cred Has anyone run into this issue? If so, I'd very much appreciate any feedback on how it was resolved (or at least a path to remediation). Thank you again.
| eval hours= if (day="Monday", hours=(a+b), hours) So basically, if day="Monday", I want hours to be a+b.
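A sketch of the usual form: eval's if() returns a value rather than performing an assignment inside its arguments, so the expression goes directly in the second argument:

```spl
| eval hours = if(day="Monday", a + b, hours)
```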
Hi, could you please help me split this string: DF - R - Emails with words (O365) sent to Personal Account (scored) (Data Exfil) I need the DF, the R, the "Emails with words (O365) sent to Personal Account" description, the (scored), and the (Data Exfil) all split into their own fields.
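One possible approach (a sketch only; the field name signature is an assumption, as is the layout of " - "-separated pieces ending in two parenthesized tags):

```spl
| eval parts = split(signature, " - ")
| eval group = mvindex(parts, 0), category = mvindex(parts, 1), rest = mvindex(parts, 2)
| rex field=rest "^(?<description>.+?)\s*\((?<disposition>[^\)]+)\)\s*\((?<use_case>[^\)]+)\)\s*$"
```

For the example string this would yield group=DF, category=R, description=Emails with words (O365) sent to Personal Account, disposition=scored, use_case=Data Exfil.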
First-time Splunker here. Can you have an inputs.conf with only: [default] host = <fqdn> in etc/system/local while having custom apps with inputs in etc/apps? Or will etc/system/local/inputs.conf interfere with the custom app inputs?
Basically my data is formatted as a message followed by info in parentheses on the right. Examples: "LL - VPN Activity (list) (monitor)", "LL - Assess (iCloud) Storage (not conf) (approve)", etc. For every message, I want to get rid of the contextual info on the right-hand side. So in these cases it would become: "LL - VPN Activity" and "LL - Assess (iCloud) Storage". Any ideas?
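A sed-style rex is one way to strip trailing parentheticals while leaving parentheses in the middle of the message alone (a sketch; the field name message is an assumption):

```spl
| rex field=message mode=sed "s/(\s*\([^)]*\))+\s*$//"
```

This should work because the repeated group only consumes parenthesized chunks that run contiguously to the end of the string, so "(iCloud)" in the middle survives while "(not conf) (approve)" at the end is removed.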
How can I remove alerts in the Messages tab in Splunk Web via curl? Users are reading them and panicking far too often.
I have a collection of log files that I am trying to parse. Quick summary: they come from Apache/Tomcat using logback. I don't have permission to change the layout of the log files at the moment, but I can work on that. The Event field from the quick parse seems to hold a lot of data that could be separated out into fields by comma. I seem to need the 3rd item in Event if we can split it by comma. Is there an easy way to do that in the query syntax? The Event section looks like: date, something about a filter, THE URL THAT I WANT, other junk, etc. 2022-05-01 23:15:24,  calling SSO: null, /topUrl/rest/services/folder/servicename/command?moreinfoEtc, referer: Null, request ip: 10.xxx.xxx.xxx I'd love to get statistics on topUrl and servicename. Any help would be great.
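A sketch that pulls the 3rd comma-separated item and then extracts the path segments (the /topUrl/rest/services/folder/servicename/... layout is taken from the example above, and Event is assumed to be the extracted field name):

```spl
| eval url = trim(mvindex(split(Event, ","), 2))
| rex field=url "^/(?<topUrl>[^/]+)/rest/services/(?<folder>[^/]+)/(?<serviceName>[^/]+)/"
| stats count by topUrl, serviceName
```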
The test_new.html file is updated every 4 hours. The HTML file may or may not have the same number of lines each time. Data only comes in immediately when I add, say, test data to the HTML file myself, which means data flow is not the issue. I expect it to send data whenever the file's timestamp changes. I need your suggestions on this. I have done the configuration below. On the UF agent I added this to inputs.conf:

[monitor:///root/splunkstorage/test_new.html]
disabled = false
index = test_normal
sourcetype = test:global:testnew:html
crcSalt = <SOURCE>

On the indexer I have this in props.conf:

[test:global:testnew:html]
DATETIME_CONFIG = CURRENT
CHECK_METHOD = modtime
LINE_BREAKER = (<html><body>)
NO_BINARY_CHECK = true
SEDCMD-addvalues = s/<head>/<html><body>\n<head>/g
TRUNCATE = 0
category = Custom
disabled = false
pulldown_type = true
Hi, I am a newbie to Splunk. I have to write a Splunk query to get the status_code count for errors (status 300 and above) and successes (status 200-299) by host. This is my current search (over 24 hrs), but unfortunately it returns 0 results, apart from the host list: index=xxxx  host=*  status=* | stats count(status>=300) as "Error", count(status<299) as "OK" by host Expected result:

Host  | Error | OK
------|-------|----
xxxx  |  23   |  1
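stats aggregation functions can't take a bare comparison as an argument; the usual fix is to wrap the condition in eval(). A sketch (note the original also leaves status=299 uncounted, so the ranges below use <300 and >=200):

```spl
index=xxxx host=* status=*
| stats count(eval(status>=300)) as "Error", count(eval(status>=200 AND status<300)) as "OK" by host
```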
Hi, I have several model ids: 12310, 12320, 12330. If the suffix is "10", "20", or "30", I define typemachine accordingly:

type | typemachine
-----|------------
10   | car
20   | moto
30   | bicycle

| eval typemachine=case(type="10", "car", type="20", "moto ", type="30", "bicycle", 1=1, "autre") However, I want to add an exception: if id=56410 or 65210, it must be "moto". Can I do that, please? Thanks
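Since case() returns the result for the first condition that matches, listing the exceptions before the suffix rules should work (a sketch; id is assumed to hold the full model id):

```spl
| eval typemachine = case(
    id="56410" OR id="65210", "moto",
    type="10", "car",
    type="20", "moto",
    type="30", "bicycle",
    1=1, "autre")
```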
Hello all. I am building a dashboard in which I needed to create a subsearch. This is the piece of code that does it:   <panel> <title>random panel title</title> <table depends="$show_debug$"> <search id="Build_list"> <query>index= here it goes my query |fields * |table important_field |format</query> <finalized> <condition match=" 'job.resultCount' != 0"> <set token="my_list">$result.search$</set> </condition> <condition> <unset token="my_list"></unset> </condition> </finalized> </search> <option name="drilldown">none</option> </table> </panel> </row> <row> <panel> <title>another panel</title> <html depends="$show_debug$"> <h3>$my_list$</h3> </html> </panel>   and here I am using the $my_list$ token:   <search> <query>$my_list$ | foreach something.* [rename "&lt;&lt;FIELD&gt;&gt;" as "&lt;&lt;MATCHSEG1&gt;&gt;"] | stats values(Url),values(UrlDomain)</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search>   This worked well the first time, but now, for every new query I run, even if I close the browser and open a new Splunk session, I still see the results of the first query I ran. It's as if $my_list$ has the first values it ever received hard-coded and I cannot reset them. I thought that <unset token="my_list"></unset> would clear it, but it doesn't. Any help please? The goal here is to use this $my_list$ token, which is a Splunk query (note the |format at the end of the query), but of course this token needs to be empty every time I run a new query. Thanks a lot in advance. Mario
Hello, I have data coming in near real-time to a Linux host with a UF installed on it. It's a new push; the objective is to send these events to the Splunk indexer so they can be viewed from the search head. Everything is in place except that I need to put new props.conf, inputs.conf, and transforms.conf files onto that server. My question is where and how I should put those configuration files: create a new app folder with a local directory under etc/apps/ from the CLI and copy all 3 configuration files there, or copy them into etc/system/local, or somewhere else? Any recommendations would be highly appreciated. Thank you so much.
Hi team, our indexer is going down very frequently due to too many open files. Currently the ulimit value for open files on the server is 128000. Are we OK to increase the ulimit value to more than 200000? If we increase it, will the open-files issue be resolved, and is there any other impact on the indexer? Kindly assist me with this issue. Thanks
Hi all, I've been working on this query for the last few days and still can't seem to crack it. (I appreciate the person from this forum who helped me get this far!) I'm trying to create 2 groups: (1) Engaged and (2) Not Engaged. The events are grouped by sessionId, which is a single conversation with a chatbot. A response from the bot or the customer is an intent. So I'm trying to count sessionIds with 5 or more intents as Engaged. Here's the idea, but obviously this query won't work since the bottom two eval commands are invalid:

index=conversation botId=ccbd | eval intent_count=if(like(intent,"%"), "1", "0") | stats sum(intent_count) as intent_count by sessionId | eval engaged=(where intent_count => 5) | eval not_engaged(where intent_count <5)

Then, create a table that would look like this:

HEADER      | TOTALS
------------|-------
Engaged     | 100
Not_engaged | 100
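One possible shape for this (a sketch under the question's assumptions about the data): count intents per sessionId, label each session with eval's if(), then total the labels:

```spl
index=conversation botId=ccbd
| stats count(intent) as intent_count by sessionId
| eval HEADER = if(intent_count >= 5, "Engaged", "Not_engaged")
| stats count as TOTALS by HEADER
```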
Hi all, I have been working with Splunk SOAR Community Edition for some time. Now I am wondering how the Enterprise version (if that's what it's called) differs from the Community version. Unfortunately I have not found any documentation on this. Could someone provide me with some information? Thanks a lot, Simon
Why does a dashboard's "Schedule PDF Delivery" produce wrong results in Splunk Enterprise version 8.2.4? For example, if the dashboard contains a field called students_number whose value is 2344, in the Schedule PDF Delivery output the value is 15782943.
I have created a trial account but can't create an app and couldn't log in with the account and user. I also can't reset the password.