All Posts

Since I cannot find much on querying ASUS router syslogs, and I am completely new to Splunk, I thought I'd start a thread for other Google Travelers in the far future. I installed Splunk ENT yesterday and I am successfully sending syslogs. For my first self-challenge, I'm trying to build a query that shows just dropped packets from external source IPs, but it's not working:

source="udp:514" index="syslog" sourcetype="syslog"
| where !(cidrmatch("10.0.0.0/8", src) OR cidrmatch("192.168.0.0/16", src) OR cidrmatch("172.16.0.0/12", src))

The raw data is below - I want to filter out all the private 192.168.x.x addresses and keep just the external addresses, like that darn external HP src IP (15.73.182.64).

Feb 4 08:46:36 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=50 ID=43798 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D135F84C3294ECB) MARK=0x8000000
Feb 4 08:46:37 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=50 ID=43799 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D136188C3294ECB) MARK=0x8000000
Feb 4 08:46:38 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=50 ID=43800 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D136590C3294ECB) MARK=0x8000000
Feb 4 08:46:40 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=50 ID=43801 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D136DA0C3294ECB) MARK=0x8000000
Feb 4 08:46:44 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=49 ID=43802 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D137DC0C3294ECB) MARK=0x8000000
Feb 4 08:46:52 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=49 ID=43803 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D139E00C3294ECB) MARK=0x8000000
Feb 4 08:47:09 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=49 ID=43804 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D13DE80C3294ECB) MARK=0x8000000
Feb 4 08:47:17 kernel: DROP IN=eth4 OUT= MAC=ff:ff:ff:ff:ff:ff:28:11:a8:58:a6:ab:08:00 src=192.168.1.109 DST=192.168.1.255 LEN=78 TOS=0x00 PREC=0x00 TTL=128 ID=41571 PROTO=UDP SPT=137 DPT=137 LEN=58 MARK=0x8000000

Next question - would anyone be able to write an app that takes the external IPs and does a lookup against the AbuseIPDB API or other blacklist APIs?
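A sketch of one way this could work, with the caveat that there is no ASUS feed here to test against: in SPL's where/eval expressions the ! prefix is not a valid negation operator (use NOT instead), and if the lowercase src=... key-value pair is not auto-extracted at search time, a rex can pull it out explicitly. The field name src and the "DROP" keyword filter below are assumptions based on the sample events above.

source="udp:514" index="syslog" sourcetype="syslog" "DROP"
| rex field=_raw "src=(?<src>\d{1,3}(?:\.\d{1,3}){3})"
| where isnotnull(src) AND NOT (cidrmatch("10.0.0.0/8", src) OR cidrmatch("172.16.0.0/12", src) OR cidrmatch("192.168.0.0/16", src))

The rex pulls the source address out of the raw event into src, and the where clause then keeps only events whose src falls outside the three RFC 1918 private ranges.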
OK. Then what does "splunk btool server list license" say? Also, what does "splunk list licenses" show?
Hi @meshorer, Yes, the rest/health endpoint shows db_data and the status of all services. https://docs.splunk.com/Documentation/SOARonprem/6.2.0/PlatformAPI/RESTInfo#.2Frest.2Fhealth  But maybe it is better to monitor the system using an external script or something similar - that way you will still be alerted even if the system stops.
I would like to use test.js to do this. I tried asking ChatGPT, but I couldn't get it to work...
Dear Splunkers, may I ask for help please~ I have a dashboard like the one below, and I need some suggestions on how to add a button in the action field so that, when the button is clicked, the content of the status field changes to "Ack". Thank you all.

<dashboard version="1.1" theme="dark" script="test.js">
  <label>111</label>
  <row>
    <panel>
      <table>
        <search>
          <query>|makeresults count=5 | eval A=random(), B=random(), status="", action="Ack/UnAck"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>
Has httpsCode been extracted OK? Please share a sample event, anonymised of course.
As you are talking about Windows, it might be more complicated than that. By default, TA_windows contains transforms which extract the host field from the event itself, so even if you set it to something in the UF's configuration, it will be overwritten by the value of the ComputerName or Computer field from the event. (And that makes sense, because Windows events are often not generated on the host they are being ingested from - WEF is a commonly used mechanism for forwarding events within a Windows environment to a single collector node, from which they are pulled by a UF.)
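For illustration only - these are not the actual stanza names shipped in the Windows TA, just a hedged sketch of the kind of index-time transform that overwrites the host metadata with a value taken from the event itself:

# transforms.conf (hypothetical stanza name)
[set_host_from_computername]
# capture the ComputerName value embedded in the raw event
REGEX = ComputerName=(\S+)
# write it into the host metadata, replacing whatever the forwarder set
DEST_KEY = MetaData:Host
FORMAT = host::$1

# props.conf (source type name is illustrative)
[WinEventLog]
TRANSFORMS-set_host = set_host_from_computername

If you want the UF's configured host value to win, you would have to disable or override such transforms in a local copy of the TA's configuration.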
It's not a question about the Splunk Connector for Kafka as such. It's more a question about how to manage your kubernetes cluster and the kafka containers in it. And those questions are definitely out of scope for this forum.
The proper order for the PEM file is:
1. Subject's certificate
2. Subject's private key
3. Issuing CA certificate chain (unless you explicitly trust the issuer of the subject's certificate)
The location of the file is tricky because the settings can either be inherited from the default server-wide settings, which you set up in server.conf - https://docs.splunk.com/Documentation/Splunk/latest/admin/serverconf#SSL.2FTLS_Configuration_details - or be overridden at the level of a specific input. As a side note - certificates for the web interface are configured differently.
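As a rough sketch (the paths and port below are just examples, not a recommendation for your environment), the server-wide default and a per-input override could look like this:

# server.conf - server-wide default, inherited unless overridden
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
sslPassword = <password of the private key, if it is encrypted>

# inputs.conf - override for a specific SSL-enabled receiving port
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/inputs.pem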
Remember that docker containers are volatile (except for the non-volatile space you "attach" to them) and docker images are "as is" after the build, so you'd have to either create a new image based on the ready-made splunk docker image or modify the dockerfile to build a custom docker image from scratch. Also, the whole idea of running Splunk in a docker environment is that you upgrade by pulling a newer version of the whole image, so you'd need to customize your image each time a new version is released.
When I go to the search head to change the configuration of TA_vectra_detect_json, I get this: "You do not have permissions to edit this configuration."
Try this: boolean operators must be written in UPPERCASE; in addition, the AND operator is mandatory only in eval. This means that you're currently searching with the additional conditions action="blocked" and the literal word "and". Ciao.
I have around 25 events with httpsCode = 200 OK, but when I use the above search it returns 0 in the success column.
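If httpsCode really contains the literal string 200 OK rather than the number 200 (just a guess based on the value shown above), the numeric comparison will never match, which would explain the 0. A string-safe variant of the same stats, assuming that field and value format:

| stats avg(timetaken) as avg_timetaken count(eval(like(httpsCode,"200%"))) as success count(eval(NOT like(httpsCode,"200%"))) as failure

Here like() does a prefix match on "200", so "200 OK" and plain "200" are both counted as success and everything else falls into failure.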
We don't have any "unofficial" release dates. And even if we did, we probably couldn't share them with you. You need to check the download page to see when it becomes available.
| stats avg(timetaken) count(eval(httpsCode == 200)) as success count(eval(httpsCode != 200)) as failure
Hi @scelikok, understood, thank you. I would love to hear how you recommend monitoring system health. I thought of making a REST call to "health" in a playbook that runs every few minutes. If you have another idea, please do tell.
I have a requirement where I need to fetch the success count, failure count, and average response time. In the events I have fields like httpsCode and timetaken, where timetaken returns values like 628, 484, etc. The logic is: if httpsCode is 200, the event should be counted as a success, and everything else should be counted as a failure. Finally, the statistics table should show the success count, failure count, and average response time.
Hi @meshorer, Since your events could not be written to the db, the service would stop. That is why you should monitor system health.
Hi @scelikok, so what happens when I reach the size limit on my server?
Any update from Splunk on this issue? Do we have to upgrade to 9.1.2 to view the monitoring console, or is there a workaround? Please advise. Thanks!