All Topics


Hi, I am new to Splunk. I was wondering if anyone knew whether it's possible to query a lookup table against data that has not been parsed. I am looking for a destination IP address (an IoC) in a raw packet, but it has not been extracted as a field in Splunk. The source IP is parsed, but it is the IP of a workstation. Basically, I want to build a search that looks up some data that has not been extracted, if that is possible. Regards,
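A sketch of one possible approach, assuming the IoC list is a lookup named ioc_lookup with a field ioc_ip, and that the destination IP appears after a dst= token in the raw data (all of these names are hypothetical placeholders): extract the IP from _raw at search time with rex, then match it against the lookup.

```
index=netflow sourcetype=raw_packets
| rex field=_raw "dst=(?<dest_ip>\d{1,3}(?:\.\d{1,3}){3})"
| lookup ioc_lookup ioc_ip AS dest_ip OUTPUT ioc_ip AS matched_ioc
| where isnotnull(matched_ioc)
```

Adjust the rex pattern to however the destination IP actually appears in your packet data; the key idea is that a search-time rex makes the unextracted value available to the lookup command.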
I am trying to index a file and I don't see why the events are being broken incorrectly. I have tried defining the line-breaker setting at both the indexer and forwarder level, as suggested in multiple articles, but with no luck. Is there a way to identify what is breaking the events? Or is there any configuration that would override all other settings and ensure that events are not broken? Any help would be much appreciated. The log file being indexed has content like the below:

2020-03-10T11:20:27.456+1100: 687196.162: [Event1, 0.0207885 secs]
   [Parallel Time: 19.8 ms, Workers: 4]
      [Worker Start (ms): Min: 687196162.2, Avg: 687196162.3, Max: 687196162.3, Diff: 0.1]
      [Ext Scanning (ms): Min: 0.9, Avg: 1.0, Max: 1.0, Diff: 0.1, Sum: 3.9]
      [Update RS (ms): Min: 2.4, Avg: 2.4, Max: 2.6, Diff: 0.2, Sum: 9.7]
         [Processed Buffers: Min: 3, Avg: 10.5, Max: 21, Diff: 18, Sum: 42]
      [Scan RS (ms): Min: 6.8, Avg: 6.9, Max: 6.9, Diff: 0.1, Sum: 27.6]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [Object Copy (ms): Min: 9.4, Avg: 9.4, Max: 9.5, Diff: 0.1, Sum: 37.7]
      [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Termination Attempts: Min: 1, Avg: 3.2, Max: 6, Diff: 5, Sum: 13]
      [Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
      [Worker Total (ms): Min: 19.7, Avg: 19.7, Max: 19.8, Diff: 0.1, Sum: 78.9]
      [Worker End (ms): Min: 687196182.0, Avg: 687196182.0, Max: 687196182.0, Diff: 0.0]
   [Code Root Fixup: 0.0 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.1 ms]
   [Other: 0.8 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 0.2 ms]
      [Ref Enq: 0.0 ms]
      [Redirty Cards: 0.1 ms]
      [Humongous Register: 0.1 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.2 ms]
   [Eden: 44.0M(44.0M)->0.0B(44.0M) Survivors: 7168.0K->7168.0K Heap: 306.0M(1024.0M)->270.0M(1024.0M)]
 [Times: user=0.08 sys=0.00, real=0.02 secs]
2020-03-10T11:20:38.710+1100: 687207.416: [Event2, 0.0204509 secs]

In Splunk's internal log there are warning messages like: Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Wed Mar 11 13:42:59 2020). Expectation: Splunk should treat the first field in the format "2020-03-10T11:20:38.710+1100: 687207.416: " as the date/timestamp and should not try to interpret other numbers as date/time. As for the setup, we have a UF sending logs to an indexer.
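A minimal props.conf sketch for this shape of log, assuming a sourcetype name of gc_log (hypothetical): break events only where a new line starts with the ISO timestamp, and pin timestamp parsing so Splunk never looks past the leading date/time.

```
[gc_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}[+-]\d{4}:)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Note this must live on the first "heavy" Splunk instance the data passes through (the indexer when using a universal forwarder); line-breaking settings placed on a UF are ignored for uncooked data, which may be why your earlier attempts had no effect.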
What is the difference between a Jenkins user API token and a Jenkins authentication token? Both are needed when setting up the Splunk Jenkins trigger action.
Hi, how can we install the Java agent on a Sparrow server? Can anyone send me a reference/documentation link or the complete steps I have to follow?
Hi, I have two different payloads returned from my search and I need to create a table from values extracted from them. Please find the payloads below.

Payload 1 { "appointment": { "appointmentId": 0, "key": "", "Durations": 60, "JType": "Medium", "StartTime": "0001-01-01T00:00:00Z", "EndTime": "0001-01-01T00:00:00Z", "isDegOnly": false, "softId": 112892 }, "commonID": "REDRF3243", "slotStatusList": [ { "date": "2020-03-24T00:00:00Z", "status": [ { "StartTime": "2020-03-24T06:30:00Z", "EndTime": "2020-03-24T08:30:00Z", "IsAvailable": false, "score": 0 }, { "StartTime": "2020-03-24T08:30:00Z", "EndTime": "2020-03-24T10:30:00Z", "IsAvailable": false, "score": 0 }, { "StartTime": "2020-03-24T10:30:00Z", "EndTime": "2020-03-24T12:30:00Z", "IsAvailable": true, "score": 100 } ], "error": { "message": "" } }, { "date": "2020-03-25T00:00:00Z", "status": [ { "StartTime": "2020-03-25T06:30:00Z", "EndTime": "2020-03-25T08:30:00Z", "IsAvailable": false, "score": 0 }, { "StartTime": "2020-03-25T08:30:00Z", "EndTime": "2020-03-25T10:30:00Z", "IsAvailable": true, "score": 92.44 }, { "StartTime": "2020-03-25T10:30:00Z", "EndTime": "2020-03-25T12:30:00Z", "IsAvailable": false, "score": 0 } ], "error": { "message": "" } } ] }

Payload 2 { "Appointment": { "Durations": 60, "TimeSlotStartTime": "0001-01-01T00:00:00+00:00", "TimeSlotEndTime": "0001-01-01T00:00:00+00:00", "IsDefOnly": false, "Job": "Medium", "SoftId": 113291 }, "commonID": "REDRF3243", "GraceMinutes": 0, "BufferMinutes": 0, "RouteRequestList": [ { "Date": "2020-04-02T00:00:00+00:00", "JBID": 11936 }, { "Date": "2020-04-03T00:00:00+00:00", "JBID": 11936 } ] }

I need a table created like the one below, where JType, StartTime, and EndTime come from Payload 1 and JBID comes from Payload 2, and the results are merged on commonID. There is also one piece of logic: I need only the first encountered "IsAvailable": true start and end time values. Please advise how I can achieve this.

JBID   JType   START_TIME           END_TIME             COMMONID
11936  Medium  2020-03-24T10:30:00  2020-03-24T12:30:00  REDRF3243
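A hedged sketch of one way to do this, assuming both payloads are whole JSON events in an index called appt_index (a hypothetical name): expand the slot array from Payload 1, keep the first available slot per commonID, then join the JBID from Payload 2.

```
index=appt_index "slotStatusList"
| spath output=commonID path=commonID
| spath output=JType path=appointment.JType
| spath output=slot path=slotStatusList{}.status{}
| mvexpand slot
| spath input=slot
| search IsAvailable="true"
| dedup commonID
| table commonID JType StartTime EndTime
| join type=inner commonID
    [ search index=appt_index "RouteRequestList"
      | spath output=commonID path=commonID
      | spath output=JBID path=RouteRequestList{}.JBID
      | eval JBID=mvdedup(JBID)
      | table commonID JBID ]
| table JBID JType StartTime EndTime commonID
```

Because mvexpand preserves the order of the slot array, dedup commonID keeps the first IsAvailable=true slot encountered. The join approach has subsearch limits; at larger volumes, a single search with stats values(...) by commonID would scale better.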
Hi there, I need help writing a query that finds the username of whoever ran a command on a Linux server. For example, if you look at the log below: <86>Mar 5 18:41:44 server1 useradd[2569]: new user: name=test1, UID=1100, GID=5020, home=/home/test1, shell=/bin/bash Someone with the session ID=2569 added a new user "test1". If I run another query like "pid=2569 eventtype=ssh_open", I can see to whom that session belongs: <86>Jan 24 18:34:03 test1 sshd[2569]: pam_unix(sshd:session): session opened for user admin by (uid=0) I was trying to write a query like this, but I keep hitting a wall: |multisearch [search index="linux_secure" eventtype=useradd | stats values(pid) AS pid1] [search index="linux_secure" eventtype=ssh_open | stats values(pid) AS pid2] | where pid1=pid2 The query above is not correct, and it returns errors like "subsearch 1 contains a non-streaming command". I want to write something that checks for an identical pid, then extracts the username from search 2 and the action from search 1. Any help would be appreciated. Thanks, Arsalan
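One way to avoid multisearch entirely, as a sketch: pull both event types in a single search, extract the fields with rex, and correlate with stats by pid. The field names (pid, session_user, new_user) are hypothetical and assume pid is not already extracted.

```
index="linux_secure" (eventtype=useradd OR eventtype=ssh_open)
| rex "sshd\[(?<pid>\d+)\]: pam_unix\(sshd:session\): session opened for user (?<session_user>\S+)"
| rex "useradd\[(?<pid>\d+)\]: new user: name=(?<new_user>\S+),"
| stats values(session_user) AS session_user values(new_user) AS new_user by pid
| where isnotnull(session_user) AND isnotnull(new_user)
```

Since stats is applied after both event types are merged, rows survive only where the same pid produced both a session-open event (giving the username) and a useradd event (giving the action). Be aware that Linux PIDs are reused over time, so a time constraint on the search window matters.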
I was sending logs from an ECS instance with the Splunk log driver. After we started sending logs via HEC and Kinesis using the same token, we stopped receiving logs from the ECS cluster. I thought the two implementations would not interfere with each other; is that wrong? Does anyone know what is happening?
I wrote the query below to get the data and display it in my dashboard, and I am getting the correct data plus additional, unwanted data. Here is the query:

index=tap-prod sourcetype=prod jobId=e62-71c72ccb3aec diff | rex field=_raw "\"diff\":(?.*)}+" | spath input=message | extract kvdelim=":" pairdelim="," message | table fieldName path expValue actValue

Here is the data I am parsing:

{
  "tapName": "tapData",
  "tapUuid": "22015f427a12",
  "diff": {
    "actValue": "tap_actualValue",
    "address": ".@gmail.com",
    "diffType": "SAMPLE_DIFFERENCE",
    "expValue": "tap_expectedValue",
    "fieldName": "Sample",
    "fullPath": "/http://www.gmail.com/file",
    "path": "/send"
  }
}

While executing the above query I get the results below, which are incorrect: Results (fieldName) ( path ) (expValue) (actValue) (address) address":.@gmail.com" expValue":"tap_expectedValue actValue":"tap_actualValue testName":"someOtherVal `Sample` `/send` `tap_expectedValue` `tap_actualValue`
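Since the event already looks like valid JSON, one simpler approach (a sketch, assuming the whole _raw is the JSON object shown) is to skip rex and extract entirely and let spath pull the exact nested paths, which avoids the delimiter-based extraction picking up unrelated key/value fragments:

```
index=tap-prod sourcetype=prod jobId=e62-71c72ccb3aec diff
| spath output=fieldName path=diff.fieldName
| spath output=path     path=diff.path
| spath output=expValue path=diff.expValue
| spath output=actValue path=diff.actValue
| table fieldName path expValue actValue
```

The extract command with kvdelim=":" splits on every colon, including those inside quoted JSON values, which is likely the source of the extra garbage fields.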
I am looking for guidance and advice on setting up limits and/or ulimit-like settings for a Windows Server 2016 installation. I've modified ulimits in a Linux installation (just set them to unlimited), but I'm not quite clear whether this is a thing in a Windows install. The plan is to pull in 2-3 event IDs from the Security WinEvent logs for phase 1 while pushing down .conf files and the Windows_TA app. Future phases will increase the WinEvents data-in logs.

Q: Do I need to worry about "ulimits" or a similar setting in the Windows environment? For Splunk core and forwarders?
Q: Do I need to modify the ulimit-like feature on all of the Windows components and forwarders, or just the indexers?
Q: I'm assuming I will be able to push limits.conf down to the forwarders if I need to set those limits?
Q: I've modified the phone-home interval to 5 minutes. Should I expect a huge bandwidth spike in phase 1 or phase 2?
Q: Any other configurations I should review to make this deployment smoother and/or not crash the gibson?

Current environment: 1x SH (Win), 2x indexers (Win; distributed, load balanced by time, not clustered), 1x master (Win), 1x deployment server (Linux), 1x heavy forwarder (Linux). I am deploying in two phases: phase 1 is 400-500 Windows forwarders pulling 2-3 event IDs; phase 2 is 4000 Windows forwarders pulling 2-3 event IDs.

Thank you, Sean
Howdy Splunkers, After creating a new database input through the DB connect GUI, the query keeps returning the same values, and the checkpoint value is not updating. Log search: index=_internal host=[db_connect_host] sourcetype=dbx_server DbInputCheckpointRepository Log messages: 2020-03-09 17:15:00.253 -0400 [QuartzScheduler_Worker-5] INFO c.s.d.s.dbinput.task.DbInputCheckpointRepository - action=load_checkpoint_from_cache checkpoint=Checkpoint{value='9090000', appVersion='3.1.2', columnType=2, timestamp='2020-03-09T17:06:32.535-04:00'} 2020-03-09 17:15:00.254 -0400 [QuartzScheduler_Worker-5] ERROR c.s.d.s.dbinput.task.DbInputCheckpointRepository - action=unable_to_save_checkpoint java.io.FileNotFoundException: D:\Program Files\Splunk\var\lib\splunk\modinputs\server\splunk_app_db_connect\[db_connect_input_name] (Access is denied) at java.io.FileOutputStream.open0(Native Method) at java.io.FileOutputStream.open(Unknown Source) at java.io.FileOutputStream.<init>(Unknown Source) at java.io.FileWriter.<init>(Unknown Source) at com.splunk.dbx.server.dbinput.task.DbInputCheckpointRepository.dumpCheckpoint(DbInputCheckpointRepository.java:206) at com.splunk.dbx.server.dbinput.task.DbInputCheckpointRepository.saveImpl(DbInputCheckpointRepository.java:239) at com.splunk.dbx.server.dbinput.task.DbInputCheckpointRepository.saveCheckpoint(DbInputCheckpointRepository.java:115) at com.splunk.dbx.server.dbinput.task.DbInputCheckpointRepository.save(DbInputCheckpointRepository.java:102) at com.splunk.dbx.server.dbinput.task.DbInputTask.saveCheckpoint(DbInputTask.java:114) at com.splunk.dbx.server.dbinput.task.processors.HecEventWriter.writeRecords(HecEventWriter.java:68) at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:203) at org.easybatch.core.job.BatchJob.call(BatchJob.java:79) at org.easybatch.extensions.quartz.Job.execute(Job.java:59) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at 
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) 2020-03-09 17:15:00.254 -0400 [QuartzScheduler_Worker-5] ERROR org.easybatch.core.job.BatchJob - Unable to write records com.splunk.dbx.server.exception.WriteCheckpointFailException: Error(s) occur when writing checkpoint. at com.splunk.dbx.server.dbinput.task.DbInputCheckpointRepository.saveCheckpoint(DbInputCheckpointRepository.java:121) at com.splunk.dbx.server.dbinput.task.DbInputCheckpointRepository.save(DbInputCheckpointRepository.java:102) at com.splunk.dbx.server.dbinput.task.DbInputTask.saveCheckpoint(DbInputTask.java:114) at com.splunk.dbx.server.dbinput.task.processors.HecEventWriter.writeRecords(HecEventWriter.java:68) at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:203) at org.easybatch.core.job.BatchJob.call(BatchJob.java:79) at org.easybatch.extensions.quartz.Job.execute(Job.java:59) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) Cheers, Jacob
Hi, I am getting this error at runtime when applying the adeum plugin in my module (there is no instrumentation code in the application): "Installation did not succeed. The application could not be installed: INSTALL_FAILED_INVALID_APK The APKs are invalid." I have a top-level build.gradle that resolves the dependency in the buildscript. Without the apply-plugin adeum line in my module-level build.gradle, I am able to build and run my application successfully.
How can I find the indexes that my saved searches run against? A few of my saved searches do not specify index names in their search strings.
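A sketch of one way to inventory this via the REST endpoint for saved searches, pulling any explicit index= tokens out of each search string:

```
| rest /servicesNS/-/-/saved/searches
| fields title search
| rex field=search max_match=0 "index\s*=\s*(?<index_name>[^\s\|\"]+)"
| table title index_name
```

Saved searches with an empty index_name column fall back to the default indexes of the owning user's role (the srchIndexesDefault setting in authorize.conf), so that is where to look for the searches that name no index.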
I have a summary index that is populated every 5 minutes from a report. The report shows the last update time for each panel and the current status of each panel. The status changes from normal to degraded if the last update was more than 480 minutes ago. The events in the summary index look like this (not including the fields added by the summary-index process):

03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-1, Status=normal
03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-2, Status=normal
03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-3, Status=normal
03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-4, Status=normal
03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-5, Status=normal
03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-6, Status=normal
03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-7, Status=normal
03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-8, Status=normal
03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-9, Status=normal
03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-10, Status=normal
03/09/2020 18:30:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-11, Status=normal
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-1, Status=normal
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-2, Status=normal
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-3, Status=normal
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-4, Status=normal
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-5, Status=normal
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-6, Status=normal
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-7, Status=normal
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-8, Status=normal
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-9, Status=degraded
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-10, Status=normal
03/09/2020 18:25:00 +0000, Last_Received="2020-03-09 18:10:20 GMT", Minutes_Ago=20, Panel="Panel-11, Status=normal

I am having some trouble with the final piece of this dynamic alert. I need to write an alert that checks every 5 minutes and sends an email when there has been a change in Status within the last 10 minutes. The logic is to generate a "system degraded" alert if any panel changes status from "normal" to "degraded", and a "system returned to normal" alert when the degraded panels have returned to normal AND all panels currently have a status of "normal". The basic alert search I have working currently fires when there is a change in status, but only for an individual panel. I have tried playing around with eventstats and streamstats to get a count, but have not been successful. The hard part here is not simply scheduling an alert that checks the status; I only want an alert when the status has changed. For instance, when a panel first changes from normal to degraded I want an alert, but I don't want an alert saying it is degraded every 5 minutes until it returns to normal. I only want the initial degraded alert, and then an alert when the panel has returned to normal, which may be several hours later. Here is the current alert search I am using for testing:

index=my_alerts earliest=-10m latest=now
| stats latest(Status) as Latest_Status, earliest(Status) as Previous_Status by Panel
| eval status_change=case(Previous_Status != Latest_Status, "Y", Previous_Status = Latest_Status, "N")
| eval system_status=case((status_change="Y" AND Latest_Status="degraded"), "System is degraded", (status_change="Y" AND Latest_Status="normal"), "System has returned to normal", status_change="N", "System status has not changed")
| where status_change = "Y"

The email alert message I created simply lists the system_status field using $result.system_status$, so when it fires it says "System is degraded" or "System has returned to normal". This is working, but again it is only per Panel. So, if I have 2 degraded panels and 1 returns to normal, the alert should not fire, because the current status of all 11 panels is not normal. I am sure I am probably overthinking this, or I am missing something really basic. Any assistance would be appreciated. Thanks!
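A sketch of one way to make the decision system-wide rather than per panel: keep the per-panel change detection, but use eventstats to count how many panels are currently degraded across all rows before deciding which message to send.

```
index=my_alerts earliest=-10m latest=now
| stats earliest(Status) AS Previous_Status latest(Status) AS Latest_Status by Panel
| eventstats sum(eval(if(Latest_Status="degraded", 1, 0))) AS degraded_panels
| where Previous_Status != Latest_Status
| eval system_status=if(degraded_panels > 0, "System is degraded", "System has returned to normal")
| stats values(Panel) AS changed_panels values(system_status) AS system_status
```

Because the where clause requires at least one panel to have changed in the window, the alert stays silent while a panel is continuously degraded, and the "returned to normal" message is emitted only when no panel remains degraded, matching the all-panels-normal requirement.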
I have a multiline event with two identical keys whose values I need to mask, as shown below. I am NOT especially handy with regex, but have managed to mask one of the values (the second). Has anyone out there had success masking a multiline event with multiple values like this? Thanks in advance.

2020-02-16 17:00:11,374 [INFO ] pool-1-thread-152 ServiceIdentity - null|null :
OrderNumber: 654321
Ids: 12345678 23456789 34567890
Response: False
manualCapture: False
PostResponses:
specialId: 1234567 <===(1)
relationship: null
nopp: 2
open: False
specialId: 7654321 <===(2)
relationship: null
nopp: 2
open: False
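A sketch using an index-time SEDCMD in props.conf, with a hypothetical sourcetype name: the trailing g flag makes the substitution apply to every occurrence in the event, so both specialId values are masked, not just the first.

```
[your_sourcetype]
SEDCMD-mask_special_ids = s/(specialId:\s*)\d+/\1XXXXXXX/g
```

If you were anchoring your regex or omitting the g flag, that would explain why only one of the two values was being masked. Note SEDCMD rewrites data at index time, so already-indexed events are unaffected.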
I am trying to implement the logic below. Each "IF" I can do separately without issue; however, I am not sure how to combine the two searches, as the second search is based on the output of the first.

IF we see more than 10 failed events (EventCode 1201 OR 1203)
THEN IF we see more than 2 different users, print ForwardedIP
| IF we see successful events (EventCode 1200 OR 1202), print usernames

The searches basically consist of:

index="auth" EventCode=1201 OR EventCode=1203 | rex "(?[^<]+)" | rex "(?[^<]+)" | stats values(UserId) as UserId by ForwardedIpAddress

and then EventCodes 1200 and 1202 for successful auth. If we see bad auths with multiple users from the same IP, and then we see a successful auth, we want to know about it.
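One way to avoid chaining two searches, as a sketch (assuming UserId and ForwardedIpAddress are already extracted): pull all four event codes in a single search and apply all three conditions with eval-based stats per source IP.

```
index="auth" (EventCode=1201 OR EventCode=1203 OR EventCode=1200 OR EventCode=1202)
| eval outcome=if(EventCode=1201 OR EventCode=1203, "fail", "success")
| stats count(eval(outcome="fail")) AS failures
        dc(eval(if(outcome="fail", UserId, null()))) AS failed_users
        values(eval(if(outcome="success", UserId, null()))) AS successful_users
        by ForwardedIpAddress
| where failures > 10 AND failed_users > 2 AND isnotnull(successful_users)
```

Each surviving row is an IP with more than 10 failures across more than 2 distinct users that also produced at least one successful auth, with the successful usernames listed alongside it.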
I am trying to write a search that lists the top 10 TCP ports accessed by unique IPs.
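A minimal sketch, assuming network data with src_ip, dest_port, and transport fields (the index, sourcetype, and field names here are hypothetical and depend on your data source): count distinct source IPs per destination port and keep the top 10.

```
index=network sourcetype=firewall transport=tcp
| stats dc(src_ip) AS unique_ips by dest_port
| sort - unique_ips
| head 10
```

If you instead want the top ports by raw event count rather than by distinct IPs, `| top limit=10 dest_port` is the shorter form.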
I have upgraded Splunk Enterprise to 7.2.4.2 as well as the forwarder. However, the Splunk Information Disclosure Vulnerability remains an issue. I can reach this URL unathenticated (https://<>:8000/... See more...
I have upgraded Splunk Enterprise to 7.2.4.2, as well as the forwarder. However, the Splunk information disclosure vulnerability remains an issue: I can reach this URL unauthenticated (https://<>:8000/en-US/splunkd/__raw/services/server/info/server-info?output_mode=json) and receive the disclosed server info. The upgrade should have resolved it per the Splunk documentation. (Nessus plugin 121164)
Hi, we have a custom app that uses the internal setup_logging module. It was working without issue until we updated ITSI to version 4.4.2; now Python throws the error below.

03-09-2020 17:11:11.351 ERROR sendmodalert - action=rebus_remedy_incident_create STDERR - ImportError: No module named builtins
03-09-2020 17:11:11.351 ERROR sendmodalert - action=rebus_remedy_incident_create STDERR - from builtins import object
03-09-2020 17:11:11.351 ERROR sendmodalert - action=rebus_remedy_incident_create STDERR - File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/setup_logging.py", line 7, in
03-09-2020 17:11:11.351 ERROR sendmodalert - action=rebus_remedy_incident_create STDERR - from ITOA.setup_logging import setup_logging

Could anyone please help? Do I need to manually place this module inside site-packages? Thanks, Sathish
Hello team, could you please help me ingest this data into Splunk? When I upload it as JSON, Splunk is unable to parse it.

2020-03-09T14:30:39+00:00 172.20.0.20 2020-03-09T14:28:13Z cos-darktrace.polarisalpha.com darktrace - - - {"pbid":313384,"pid":702,"time":1583764093000,"timestamp":"2020-03-09 14:28:13","creationTime":1583764287000,"creationTimestamp":"2020-03-09 14:31:27","name":"SaaS::Multiple SaaS Resource Deletions","components":[8352],"didRestrictions":[],"didExclusions":[],"throttle":3600,"sharedEndpoints":false,"interval":0,"sequenced":false,"active":true,"retired":false,"state":"New","commentCount":0,"triggeredComponents":[{"time":1583764092000,"timestamp":"2020-03-09 14:28:12","cbid":563836,"cid":8352,"chid":12738,"size":6,"threshold":5,"interval":1800,"logic":{"data":{"left":{"left":"A","operator":"AND","right":{"left":"C","operator":"AND","right":"D"}},"operator":"OR","right":{"left":"B","operator":"AND","right":{"left":"C","operator":"AND","right":"D"}}},"version":"v0.1"},"metric":{"mlid":306,"name":"saasrurcedeleted","label":"SaaS Resource Deleted"},"device":{"did":15727,"sid":-9,"hostname":"SaaS::Office365: cook@polar.com","firstSeen":1538579956000,"lastSeen":1583764287000,"typename":"saasprovider","typelabel":"SaaS Provider"},"triggeredFilters":[{"cfid":68892,"id":"A","filterType":"Unusual SaaS usage","comparatorType":">","arguments":{"value":40},"triggeringValue":"41"},{"cfid":68894,"id":"C","filterType":"Direction","comparatorType":"is","arguments":{"value":"in"},"triggeringValue":"in"},{"cfid":68895,"id":"D","filterType":"Message","comparatorType":"does not contain","arguments":{"value":"event=DeleteEvent"},"triggeringValue":"event=MoveToDeletedItems,user=cook@polar.com"},{"cfid":68896,"id":"d1","filterType":"Source IP","comparatorType":"display","arguments":{},"triggeringValue":"65.205.175.4"},{"cfid":68897,"id":"d2","filterType":"Message","comparatorType":"display","arguments":{},"triggeringValue":"event=MoveToDeletedItems,user=michael.cook@polarisalpha.com"},{"cfid":68898,"id":"d3","filterType":"Unusual SaaS usage","comparatorType":"display","arguments":{},"triggeringValue":"41"},{"cfid":68899,"id":"d4","filterType":"Event details","comparatorType":"display","arguments":{},"triggeringValue":""}]}]}
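The parse failure is most likely because each event is a JSON object wrapped in a syslog header, so the event as a whole is not valid JSON. A props.conf sketch, with a hypothetical sourcetype name, that parses the timestamp from the syslog prefix, strips the prefix at index time, and then lets search-time JSON extraction work:

```
[darktrace_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 30
# remove everything before the first '{' so only the JSON body is indexed
SEDCMD-strip_syslog_header = s/^[^{]+//
KV_MODE = json
```

An alternative that keeps the syslog header intact is to leave the event as-is and extract fields at search time with `| rex field=_raw "(?<json>\{.*\})" | spath input=json`.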
Hello! I'm using version 7.3.4 of Splunk Light (rpm install on RHEL 8) with a forwarder of the same version, and I'm getting this error in the forwarder log: "while idling: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number". What is causing that? The Splunk console recognizes the host the forwarder is on, but they are unable to exchange data. Thanks, Dave
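In OpenSSL, "wrong version number" typically means one side is speaking TLS while the other expects plaintext, for example a forwarder sending unencrypted data to an SSL-enabled receiving port, or SSL data to a plain splunktcp port. A sketch of a matched pair, with hypothetical hostnames, paths, and port number, for comparison against your configs:

```
# indexer inputs.conf: SSL receiving port
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem

# forwarder outputs.conf: send SSL to that same port
[tcpout:primary_indexers]
server = indexer.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
```

The key check is that both sides agree: an [splunktcp-ssl] stanza on the indexer must be paired with SSL settings in the forwarder's tcpout stanza, and a plain [splunktcp] stanza must not be.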