All Topics

Hi all, I have set up an indexer cluster to achieve high availability at the ingestion phase. I'm aware of the "Update peer configuration" procedure, and I have reviewed the instructions under the Details tab of the SentinelOne App. I cannot see an explicit mention of an indexer-cluster setup. What are the steps to set up the input configuration for an indexer cluster while avoiding data duplication? Thanks for your help.
Hi there, logs sent to SC4S include the date, time, and host in the event; however, by the time they reach the indexer, the date, time, and host are missing. How can I get them back so the logs look exactly the same? I would like the date, time, and host included in the event. I appreciate any hints. Thanks and regards, pawelF
How can I convert a Splunk event to STIX 2.1 JSON? I am thinking about a connection to a SOC center, and I currently use Splunk Enterprise. How can I do this? Is there an app that can do the conversion?
Hi. We are seeing weird behaviour on one of our universal forwarders. We have been sending logs from this forwarder for quite a while, and this has been working properly the entire time. New logfiles are created every second hour and log lines are appended to the newest file. Last night the universal forwarder stopped working normally. Every time a new file is created, the forwarder sends the first line to Splunk, but lines appended later are not forwarded. There are no errors logged in splunkd.log on the forwarder, nor any error messages on the receiving indexers. As far as I can see, there have not been any changes on the forwarder or on the Splunk servers that might cause this defect. Is there any way to debug the parsing of the logfile on the forwarder to identify the issue? Any other ideas about what could be the cause here? Thanks.
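A couple of hedged starting points for debugging file monitoring on the forwarder (paths assume a default installation):

  # Show the tailing status of every monitored file, including read offsets
  $SPLUNK_HOME/bin/splunk list inputstatus

  # Raise the log level of the file-monitoring components, then watch splunkd.log
  # (add these lines to $SPLUNK_HOME/etc/log-local.cfg and restart the forwarder)
  category.TailingProcessor=DEBUG
  category.WatchedFile=DEBUG

If the file is only read once at creation, comparing the offset reported by inputstatus against the actual file size can show whether the forwarder believes it has already read the appended data (for example, a checksum collision that the initCrcLength setting in inputs.conf can help with).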
The host value in the file below gets changed automatically every now and then. Can you help me write a bash script that checks the host value every 5 minutes and, if the value differs from the actual hostname as reported by "uname -n", automatically corrects the host value, saves the file, and then restarts the Splunk service?

cat /opt/splunk/etc/system/local/inputs.conf
[default]
host=iorper-spf52
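A minimal bash sketch under those assumptions (the file path and restart method match the post; the script name and cron schedule are placeholders):

  #!/bin/bash
  # Compare the host= value in inputs.conf with the real hostname and fix it if they differ.
  CONF=/opt/splunk/etc/system/local/inputs.conf
  ACTUAL=$(uname -n)
  CURRENT=$(awk -F= '/^host[[:space:]]*=/ {gsub(/[[:space:]]/, "", $2); print $2; exit}' "$CONF")

  if [ "$CURRENT" != "$ACTUAL" ]; then
      # Rewrite the host line in place, then restart Splunk so the change takes effect
      sed -i "s/^host[[:space:]]*=.*/host=$ACTUAL/" "$CONF"
      /opt/splunk/bin/splunk restart
  fi

Scheduled from cron every 5 minutes, e.g.: */5 * * * * /opt/scripts/fix_splunk_host.sh. Note that a full restart interrupts forwarding, so it may also be worth investigating what keeps rewriting the file in the first place.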
Hi all, I have to parse Juniper switch logs that are very similar to Cisco IOS. In the Juniper Add-on there isn't anything to parse these logs, so I have to create a new add-on. Has anyone already done this who can give me some hints, so I can avoid reinventing the wheel? Ciao. Giuseppe
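As a starting point, a minimal props.conf sketch for a custom syslog-style sourcetype (the sourcetype name, timestamp format, and field extraction are assumptions to be adapted to the actual Juniper events):

  [juniper:switch]
  SHOULD_LINEMERGE = false
  LINE_BREAKER = ([\r\n]+)
  TIME_PREFIX = ^
  TIME_FORMAT = %b %d %H:%M:%S
  MAX_TIMESTAMP_LOOKAHEAD = 25
  # Example search-time extraction; adjust the pattern to the real message layout
  EXTRACT-process = ^\w+\s+\d+\s+[\d:]+\s+\S+\s+(?<process>[^\[:]+)

Field aliases and CIM tags can then be layered on top so the new add-on behaves like the Cisco one.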
Hi, I'm running Splunk 9.0.7 and the add-on Splunk_TA_MS_Security version 2.1.1. I followed the instructions for the add-on, https://docs.splunk.com/Documentation/AddOns/released/MSSecurity/Configure, and reviewed the Microsoft article https://learn.microsoft.com/en-us/microsoft-365/security/defender/api-hello-world?view=o365-worldwide Basically, I created an App Registration in our Azure tenant, added the required permissions, and created a secret. With all this in place, I followed the Microsoft article and ran the PowerShell scripts to test the connection, but the token I obtain only gets a single permission. Could someone tell me what I am doing wrong? I expected to get all the permissions assigned to the application, and I think that is why I get the 403 errors in splunkd.log:

12-17-2023 13:14:32.037 +0100 ERROR ExecProcessor [19404 ExecProcessor] - message from ""C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_MS_Security\bin\microsoft_defender_endpoint_atp_alerts.py"" 403 Client Error: Forbidden for url: https://api-eu.securitycenter.microsoft.com/api/alerts?$expand=evidence&$filter=lastUpdateTime+gt+2023-11-17T12:14:31Z
12-17-2023 13:17:38.251 +0100 ERROR ExecProcessor [19404 ExecProcessor] - message from ""C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_MS_Security\bin\microsoft_365_defender_endpoint_incidents.py"" 403 Client Error: Forbidden for url: https://api.security.microsoft.com/api/incidents?$filter=lastUpdateTime+gt+2023-11-17T12:17:31Z

Thanks
Hi all, I have this query:

| timechart span=1s count AS TPS
| eventstats max(TPS) as peakTPS
| eval peakTime=if(TPS=peakTPS, _time, null())
| stats avg(TPS) as avgTPS first(peakTPS) as peakTPS min(peakTime) as peakTime
| fieldformat peakTime=strftime(peakTime,"%x %X")

This currently outputs the max TPS, when the max TPS took place, and the average TPS. I was wondering if it's possible to also display the min TPS and when that took place? TIA
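A sketch extending the same pattern to the minimum (field names mirror the peak fields; the same base search is assumed):

| timechart span=1s count AS TPS
| eventstats max(TPS) as peakTPS min(TPS) as minTPS
| eval peakTime=if(TPS=peakTPS, _time, null()), minTime=if(TPS=minTPS, _time, null())
| stats avg(TPS) as avgTPS first(peakTPS) as peakTPS min(peakTime) as peakTime first(minTPS) as minTPS min(minTime) as minTime
| fieldformat peakTime=strftime(peakTime,"%x %X")
| fieldformat minTime=strftime(minTime,"%x %X")

min(peakTime) and min(minTime) report the first bucket at which each extreme occurred; use max() instead for the last occurrence.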
This is my source code:

        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
Hello Splunkers!! I am getting the error below while executing the search. Please let me know why this error occurs and help me fix the issue.
We need to fetch user activity such as:
1. when a user has accessed the Splunk Cloud platform
2. activities performed by users in the Splunk Cloud platform

Please provide the API request to fetch the audit logs or events needed to determine user activity on the Splunk Cloud platform.
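A hedged sketch using the REST search export endpoint against the _audit index (the stack name and credentials are placeholders, and REST port availability depends on your Splunk Cloud subscription):

  curl -k -u admin:changeme \
    https://mystack.splunkcloud.com:8089/services/search/jobs/export \
    -d search="search index=_audit action=login_attempt earliest=-24h" \
    -d output_mode=json

Replacing the search with, e.g., "search index=_audit action=search earliest=-24h" returns the searches users have run.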
Hello everyone, I wanted to know how to handle the cookie page in Synthetic scripting while creating a user journey: what command should be used to accept/reject cookies? Any suggestion would be a great help. Thank you, Mahendra Shetty
Hi Team, I am looking for help to trigger an alert if the latest result of a timechart command is 0. Suppose I am running a search over the last 8 hours with span=2h; if the latest bucket (e.g. 12-18-23 00:00) is "0", it should raise an alert. It should also display "0" when there were zero events in the whole 8 hours; at the moment I get nothing at all if there are no events during that time. Thank you,
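A minimal SPL sketch (index and filters are placeholders); the appendpipe guard adds a zero row when the base search returns no events at all, so the alert can still fire:

index=my_index sourcetype=my_sourcetype
| timechart span=2h count
| appendpipe [ stats count as rows | where rows=0 | eval _time=now(), count=0 | fields _time count ]
| tail 1
| where count=0

Save it as an alert over the last 8 hours with the trigger condition "number of results > 0".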
Hi everyone, I have a column chart for the query below. The x-axis labels are sorted in alphabetical order, but my requirement is to display them in a fixed order (critical, high, medium, low, informational). Additionally, can we have a unique colour for the bar of each x-axis label (e.g. critical: red, high: green)? Can someone guide me on how to implement these changes? Appreciate your help in advance!!

Query: `notable` | stats count by urgency
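For the ordering, a small SPL sketch that sorts on a helper field and then drops it:

`notable`
| stats count by urgency
| eval sort_order=case(urgency="critical",1, urgency="high",2, urgency="medium",3, urgency="low",4, urgency="informational",5)
| sort sort_order
| fields - sort_order

For per-bar colours in a Simple XML column chart, charting.fieldColors applies to series rather than x-axis categories, so one common workaround is to turn each urgency into its own series (e.g. | chart count over urgency by urgency) and then set something like <option name="charting.fieldColors">{"critical":0xD93F3C,"high":0xF58F39}</option> (the colour codes here are illustrative).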
I have a search that returns a list of users and the countries logins have occurred from, grouped by user:

index=o365 UserloginFailed*
| iplocation ClientIP
| search Country!=Australia
| stats values(Country) by user

So if a user logs in from one country, I get a single record for the user (user, Country). If a user logs in from multiple locations, I get the user name in one column and a list of the source locations in the values(Country) column. I would like to construct the search so that I only see those users who have logins from multiple countries. Thanks
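A sketch using a distinct count as the filter (note it counts distinct countries after the Australia exclusion):

index=o365 UserloginFailed*
| iplocation ClientIP
| search Country!=Australia
| stats dc(Country) as country_count values(Country) as Countries by user
| where country_count > 1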
Hello everyone, I have created an alert that looks for security events for a few applications, and if the condition matches it must notify the users related to that specific application. Let's say we have applications A and B; application A has a users field with the values test, test2, test3, and application B has a users field with the values test4, test5, test6. If application A has any security-breach events, it must send an email to its users. Regards, Sai
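One hedged approach: keep the application-to-recipients mapping in a lookup (the lookup name, fields, and base search below are hypothetical) and resolve the recipients per result:

index=security_events severity=high
| stats count by application
| lookup app_notification_list application OUTPUT recipients

Then configure the alert to trigger "for each result" and set the email action's To field to $result.recipients$, so each application's matching row is sent to that application's own users.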
Hi, I want to create a sensitivity table showing how many errors happen on average in each time interval. I wrote the following, and it works fine:

| eval time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q")
| bin span=1d time
| stats sum(SumTotalErrors) as sumErrors by time
| eval readable_time = strftime(time, "%Y-%m-%d %H:%M:%S")
| stats avg(sumErrors)

Now, I want to: 1. add a generic loop to calculate the average for spans of 1m, 2m, 3m, 5m, 1h, ... and present them all in one table (I tried to replace 1d with a parameter but haven't succeeded yet); 2. give the user the option to enter their desired span in a dashboard and calculate the average errors for them. How can I do that? Thanks, Maayan
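For (1), SPL has no loop construct, but since the average per bucket is roughly total errors divided by the number of buckets, every span's average can be derived arithmetically in one pass (a sketch; the span list is an example and the estimate ignores partial buckets):

| eval time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q")
| stats sum(SumTotalErrors) as totalErrors min(time) as t0 max(time) as t1
| eval duration = t1 - t0
| eval span_seconds = "60 120 180 300 3600 86400"
| makemv span_seconds
| mvexpand span_seconds
| eval avgErrorsPerSpan = round(totalErrors * span_seconds / duration, 2)
| table span_seconds avgErrorsPerSpan

For (2), expose the span as a dashboard input (e.g. a dropdown with token span_tok and choices 1m, 5m, 1h) and reference it in the search as | bin span=$span_tok$ time.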
Hi Splunkers, we have ingested threat-intelligence feeds from Group-IB into Splunk, and we want to benefit from this data as much as possible. I want to understand how Splunk ES consumes this data. Do we need to explicitly make Splunk ES use this data and alert us when a match happens, or does Splunk ES use it without our interaction? Are we required to create custom correlation rules and configure adaptive response actions, or what?
Hi All, I get logs like the one below and I need to create a table out of them.

<p align='center'><font size='4' color=blue>Disk Utilization for gcgnamslap in Asia Testing -10.100.158.51 </font></p> <table align=center border=2>
<tr style=background-color:#2711F0 ><th>Filesystem</th><th>Type</th><th>Blocks</th><th>Used %</th><th>Available %</th><th>Usage %</th><th>Mounted on</th></tr>
<tr><td> /dev/root </td><td> ext3 </td><td> 5782664 </td><td> 1807636 </td><td> 3674620 </td><td bgcolor=red> 33% </td><td> / </td></tr>
<tr><td> devtmpfs </td><td> devtmpfs </td><td> 15872628 </td><td> 0 </td><td> 15872628 </td><td bgcolor=white> 0% </td><td> /dev </td></tr>
<tr><td> tmpfs </td><td> tmpfs </td><td> 15878640 </td><td> 10580284 </td><td> 5298356 </td><td bgcolor=red> 67% </td><td> /dev/shm </td></tr>
<tr><td> tmpfs </td><td> tmpfs </td><td> 15878640 </td><td> 26984 </td><td> 15851656 </td><td bgcolor=white> 1% </td><td> /run </td></tr>
<tr><td> tmpfs </td><td> tmpfs </td><td> 15878640 </td><td> 0 </td><td> 15878640 </td><td bgcolor=white> 0% </td><td> /sys/fs/cgroup </td></tr>
<tr><td> /dev/md1 </td><td> ext3 </td><td> 96922 </td><td> 36667 </td><td> 55039 </td><td bgcolor=red> 40% </td><td> /boot </td></tr>
<tr><td> /dev/md6 </td><td> ext3 </td><td> 62980468 </td><td> 28501072 </td><td> 31278452 </td><td bgcolor=red> 48% </td><td> /usr/sw </td></tr>
<tr><td> /dev/mapper/cp1 </td><td> ext4 </td><td> 1126568640 </td><td> 269553048 </td><td> 800534468 </td><td bgcolor=white> 26% </td><td> /usr/p1 </td></tr>
<tr><td> /dev/mapper/cp2 </td><td> ext4 </td><td> 1126568640 </td><td> 85476940 </td><td> 984610576 </td><td bgcolor=white> 8% </td><td> /usr/p2 </td></tr>
</table></body></html>

I used the query below to get the table:

... | rex field=_raw "Disk\sUtilization\sfor\s(?P<Server>[^\s]+)\sin\s(?P<Region>[^\s]+)\s(?P<Environment>[^\s]+)\s\-(?P<Server_IP>[^\s]+)\s\<"
| rex field=_raw max_match=0 "\<tr\>\<td\>\s(?P<Filesystem>[^\s]+)\s\<\/td\>\<td\>\s(?P<Type>[^\s]+)\s\<\/td\>\<td\>\s(?P<Blocks>[^\s]+)\s\<\/td\>\<td\>\s(?P<Used>[^\s]+)\s\<\/td\>\<td\>\s(?P<Available>[^\s]+)\s\<\/td\>\<td\sbgcolor\=\w+\>\s(?P<Usage>[^\%]+)\%\s\<\/td\>\<td\>\s(?P<Mounted_On>[^\s]+)\s\<\/td\>\<\/tr\>"
| table Server,Region,Environment,Server_IP,Filesystem,Type,Blocks,Used,Available,Usage,Mounted_On
| dedup Server,Region,Environment,Server_IP

And below is the table I am getting — a single row in which every per-filesystem field is multivalued:

Server: gcgnamslap   Region: Asia   Environment: Testing   Server_IP: 10.100.158.51
Filesystem: /dev/root, devtmpfs, tmpfs, tmpfs, tmpfs, /dev/md1, /dev/md6, /dev/mapper/cp1, /dev/mapper/cp2
Type: ext3, devtmpfs, tmpfs, tmpfs, tmpfs, ext3, ext3, ext4, ext4
Blocks: 5782664, 15872628, 15878640, 15878640, 15878640, 96922, 62980468, 1126568640, 1126568640
Used: 1807636, 0, 10580284, 26984, 0, 36667, 28501072, 269553048, 85476940
Available: 3674620, 15872628, 5298356, 15851656, 15878640, 55039, 31278452, 800534468, 984610576
Usage: 33, 0, 67, 1, 0, 40, 48, 26, 8
Mounted_On: /, /dev, /dev/shm, /run, /sys/fs/cgroup, /boot, /usr/sw, /usr/p1, /usr/p2

Here, the fields Filesystem, Type, Blocks, Used, Available, Usage, and Mounted_On all come up in one row.
I want the table to be separated into rows like below:

Server      Region  Environment  Server_IP      Filesystem       Type      Blocks      Used       Available  Usage  Mounted_On
gcgnamslap  Asia    Testing      10.100.158.51  /dev/root        ext3      5782664     1807636    3674620    33     /
gcgnamslap  Asia    Testing      10.100.158.51  devtmpfs         devtmpfs  15872628    0          15872628   0      /dev
gcgnamslap  Asia    Testing      10.100.158.51  tmpfs            tmpfs     15878640    10580284   5298356    67     /dev/shm
gcgnamslap  Asia    Testing      10.100.158.51  tmpfs            tmpfs     15878640    26984      15851656   1      /run
gcgnamslap  Asia    Testing      10.100.158.51  tmpfs            tmpfs     15878640    0          15878640   0      /sys/fs/cgroup
gcgnamslap  Asia    Testing      10.100.158.51  /dev/md1         ext3      96922       36667      55039      40     /boot
gcgnamslap  Asia    Testing      10.100.158.51  /dev/md6         ext3      62980468    28501072   31278452   48     /usr/sw
gcgnamslap  Asia    Testing      10.100.158.51  /dev/mapper/cp1  ext4      1126568640  269553048  800534468  26     /usr/p1
gcgnamslap  Asia    Testing      10.100.158.51  /dev/mapper/cp2  ext4      1126568640  85476940   984610576  8      /usr/p2

Please help me create a query that produces the table in the expected form above. Your kind inputs are highly appreciated..!! Thank You..!!
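A common approach is to zip the multivalue fields together, expand, and split them back out; a sketch building on the rex extractions above (using "|" as the zip delimiter, which must not occur in the values):

...
| eval zipped=mvzip(mvzip(mvzip(mvzip(mvzip(mvzip(Filesystem, Type, "|"), Blocks, "|"), Used, "|"), Available, "|"), Usage, "|"), Mounted_On, "|")
| mvexpand zipped
| eval zipped=split(zipped, "|")
| eval Filesystem=mvindex(zipped,0), Type=mvindex(zipped,1), Blocks=mvindex(zipped,2), Used=mvindex(zipped,3), Available=mvindex(zipped,4), Usage=mvindex(zipped,5), Mounted_On=mvindex(zipped,6)
| table Server, Region, Environment, Server_IP, Filesystem, Type, Blocks, Used, Available, Usage, Mounted_On

Each mvzip pairs two multivalue fields element by element, so after mvexpand each row carries one filesystem's values, which split/mvindex unpack into separate fields.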
Hi Team, I have the three log events below, which carry status codes 200, 400, and 500 in different logs. I need help finding the error rate for each distinct status code over time.

Event 1: 400 error
{
  body: {
    message: [
      { errorMessage: must have required property 'objectIds', field: objectIds }
      { errorMessage: must be equal to one of the allowed values : [object1,object2], field: objectType }
    ]
    statusCode: 400
    type: BAD_REQUEST_ERROR
  }
  headers: { Access-Control-Allow-Origin: *, Content-Type: application/json }
  hostname:
  level: 50
  msg: republish error response
  statusCode: 400
  time: ****
}

Event 2: 500 error
{
  awsRequestId:
  body: { message: Unexpected token “ in JSON at position 98 }
  headers: { Access-Control-Allow-Origin: *, Content-Type: application/json }
  msg: reprocess error response
  statusCode: 500
  time: ***
}

Event 3: success
{
  awsRequestId:
  body: { message: republish request has been submitted for [1] ids }
  headers: { Access-Control-Allow-Origin: *, Content-Type: application/json }
  msg: republish success response
  statusCode: 200
  time: ***
}
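A minimal SPL sketch (the index and sourcetype are placeholders; statusCode is assumed to be extractable from the JSON with spath):

index=my_index sourcetype=my_app_logs
| spath statusCode
| eval statusCode=tonumber(statusCode)
| bin _time span=5m
| stats count as total count(eval(statusCode>=400)) as errors count(eval(statusCode=400)) as count_400 count(eval(statusCode=500)) as count_500 by _time
| eval error_rate_pct=round(100 * errors / total, 2)

If a simple per-code count over time is enough, | timechart span=5m count by statusCode after the spath does the job on its own.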