All Topics



Hi, I am trying to understand the best/most cost-effective approach to ingest logs from Azure AKS into Splunk Enterprise with Enterprise Security. The logs we have to collect are mainly for security purposes. Here are the options I have found:
- Use the "Splunk OpenTelemetry Collector for Kubernetes": https://docs.splunk.com/Documentation/SVA/current/Architectures/OTelKubernetes
- Use cloud facilities to export the logs to Storage Accounts
- Use cloud facilities to export the logs to Event Hubs
- Use cloud facilities to send syslog to a Log Analytics workspace: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-syslog

References:
https://learn.microsoft.com/en-us/azure/azure-monitor/containers/monitor-kubernetes
https://learn.microsoft.com/en-us/azure/aks/monitor-aks
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-data-export?tabs=portal
https://learn.microsoft.com/en-us/azure/architecture/aws-professional/eks-to-aks/monitoring
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-workspace-overview

Is there a way to use cloud facilities to stream the logs directly to Splunk so that we can avoid deploying the OTel collector? Otherwise, if we must first land the logs in a Log Analytics workspace, Storage Account, or Event Hub and pull them into Splunk via API calls with the "Splunk Add-on for Microsoft Cloud Services" or the "Microsoft Azure Add-on for Splunk", which is the best/most cost-effective approach? Thanks a lot, Edoardo
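For context on the Event Hub route, below is a minimal sketch of exporting AKS diagnostic logs to an Event Hub with the Azure CLI (which an add-on such as the Splunk Add-on for Microsoft Cloud Services can then consume). The resource names, authorization rule, and log categories are placeholders/assumptions, not values from the original post.

# Hypothetical resource names; replace with your own.
AKS_ID=$(az aks show -g my-rg -n my-aks --query id -o tsv)

# Route selected AKS control-plane log categories to an existing Event Hub.
az monitor diagnostic-settings create \
  --name aks-to-splunk \
  --resource "$AKS_ID" \
  --event-hub my-eventhub \
  --event-hub-rule "/subscriptions/<sub>/resourceGroups/my-rg/providers/Microsoft.EventHub/namespaces/my-ns/authorizationRules/RootManageSharedAccessKey" \
  --logs '[{"category":"kube-audit","enabled":true},{"category":"kube-apiserver","enabled":true}]'
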
What predefined templating variables can be used to get the details below for a synthetic event? I tried #foreach ($item in ${fullEventList})#end but could only get the summary and event details. How can the other details be fetched?
Hi all, please help me extract a field from the two events below:

System.Exception: Assertion violated: stream.ReadByteInto(bufferStream) == 0x03
System.Exception: An error was encountered while attempt to fetch proxy credentials for user 'xyz

Expected field values:
system_exception=Assertion violated: stream.ReadByteInto
system_exception=An error was encountered while attempt to fetch proxy credentials for user

thanks
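A minimal SPL sketch for this kind of extraction, assuming the message always starts with "System.Exception: " and that stopping at the first single quote is acceptable; the index name is a placeholder and the field name system_exception comes from the post.

index=your_index "System.Exception"
| rex "System\.Exception:\s+(?<system_exception>[^']+)"
| eval system_exception=trim(system_exception)
| table _raw system_exception
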
Hello! I have installed the Kemp add-on from here: https://splunkbase.splunk.com/app/6830 . The issue is I cannot find proper documentation on how to set up the data inputs and what sourcetype to specify in inputs.conf. For more context, I am collecting the logs through syslog, not the API, so I need to specify the sourcetype in inputs.conf for parsing to work properly.
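A minimal inputs.conf sketch for the syslog case. The sourcetype value below is only a guess at what the add-on might expect; the authoritative value should be taken from the stanza names in the add-on's default/props.conf. The port and index are placeholders.

[udp://514]
index = netops
sourcetype = kemp:lm:syslog
connection_host = ip
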
Enterprise Security is not available in the Splunk Cloud trial version. I need assistance with it.
I have the problem that I can't delete an input filter that I probably formulated incorrectly, and now I can't take it out. Error occurred attempting to remove a.b.*.*, c.d.e.0, f.g.*:5514: Malformed IP address: a.b.*.*, c.d.e.0, f.g.*:5514. An outputs.conf under /system/local does not exist.
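A minimal sketch of how to locate and remove such a value by hand, assuming the filter was saved as an acceptFrom setting on a network input; that is a guess, since the post does not say which setting holds the value, and the stanza name shown is hypothetical.

# Find which configuration file and stanza contain the malformed value.
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i "acceptFrom\|5514"
# Edit the inputs.conf file btool points to and delete or correct the offending line,
# e.g. a stanza like:
# [tcp://5514]
# acceptFrom = a.b.*.*, c.d.e.0, f.g.*:5514
# then restart Splunk.
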
Hi all, In my AD computer account deletion correlation search, I use _time and SubjectUserName as throttling fields for grouping. Is adding _time to throttling the correct approach? Please correct me if I'm wrong.

Query:
index=win sourcetype=XmlWinEventLog EventCode=4743
| bin _time span=5m
| stats values(EventCode) as EventCode, values(signature) as EventDescription, values(TargetUserName) as deleted_computer, dc(TargetUserName) as computeruser_count by _time SubjectUserName
| where computeruser_count > 20

Time range: Earliest Time 20m@m, latest now
Cron schedule: */15 * * * *
Scheduling: set to Continuous
Throttling: window duration 12 hours
Fields to group by: SubjectUserName, _time

Thanks in advance.
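One common variant, shown as a sketch only (not a statement about how ES throttling must be configured): keep the 5-minute bucket but expose it as an explicit field, so the throttle can group on SubjectUserName plus that bucket field rather than on raw _time.

index=win sourcetype=XmlWinEventLog EventCode=4743
| bin span=5m _time AS time_bucket
| stats values(EventCode) as EventCode, values(signature) as EventDescription, values(TargetUserName) as deleted_computer, dc(TargetUserName) as computeruser_count by time_bucket SubjectUserName
| where computeruser_count > 20
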
Hi Splunkers, today I have a very strange case to manage. I'm going to try to be as clear as possible. The scenario is a fully on-prem Splunk Enterprise environment with many components. For this customer, we are not the original provider; another company was in charge before us and developed a fully custom app. About this application:
- No documentation has been shared by the previous provider
- It now shows some error messages that are not completely clear.
So, in a nutshell, we have to try to understand why we get those errors and try to fix them. Of course I'm not here to ask you "Hey magic guys, give me the magic solution!"; the purpose of this topic is to ask for your help in understanding the data we have (we only have a small GUI dashboard with a short description of the app and how it works) and figuring out how we can fix those errors. The app analyzes indexers and their indexes. Its purpose is to check whether indexes are retaining the correct amount of historical data; to achieve this, it investigates the index retention status. So, how is this investigation done? The app compares the currentTimePeriodDay value against frozenTimePeriodDay. To decide whether there is an error, the app considers 2 possible cases:
- currentTimePeriodDay > frozenTimePeriodDay + 45: this case is considered unhealthy because indexes are retaining more historical data than expected
- currentTimePeriodDay < frozenTimePeriodDay: this case is considered unhealthy because indexes are retaining insufficient historical data.
For both cases, the suggested workaround is a generic tuning of retention and disk space settings. Of course there are more specific error messages for each index on every indexer (we have a menu to select specific indexers), but this, from my point of view, is a further analysis step; what is not clear to my team and me is the foundational logic of the app. I mean: how should the comparison between currentTimePeriodDay and frozenTimePeriodDay help us check whether index retention is healthy? How are the two values related? Why could it be an unhealthy symptom if one of them is greater than the other?
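As a point of comparison, here is a sketch only (not the custom app's actual logic, which is unknown): the configured retention of an index is frozenTimePeriodInSecs, and the age of the oldest searchable event can be derived from minTime, so a rough "days currently retained vs. days configured" check could look like the search below. The minTime timestamp format can vary, so treat the strptime pattern and the field names as assumptions to verify.

| rest /services/data/indexes splunk_server=*
| eval frozenTimePeriodDay = frozenTimePeriodInSecs / 86400
| eval currentTimePeriodDay = round((now() - strptime(minTime, "%Y-%m-%dT%H:%M:%S%z")) / 86400, 1)
| eval status = case(currentTimePeriodDay > frozenTimePeriodDay + 45, "retaining more than expected",
    currentTimePeriodDay < frozenTimePeriodDay, "retaining less than expected",
    true(), "ok")
| table splunk_server title currentTimePeriodDay frozenTimePeriodDay status
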
Hi. I have to upgrade a Splunk environment from Splunk 7.2.4.2 to 9.1. I don't have the option to migrate to a new cluster. The Upgrade Readiness App is not available for our current version. I know I need to go 7 to 8 and then to 9, in the order of Cluster Master / Indexer Peers.... / Search Peers / Deployer / Deployment / UFs and HFs. Can anyone offer any input on what may catch me out in the process? Thanks
How can I truncate the log description after 20 words in Splunk and store the result in a new field?
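A minimal SPL sketch, assuming the text lives in a field called description (that field name is an assumption) and that words are separated by single spaces.

| eval words = split(description, " ")
| eval short_description = mvjoin(mvindex(words, 0, 19), " ")
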
Hi Splunk experts, We have Splunk Enterprise running on Linux. Is there any option to disable or skip secure inter-Splunk communication for the REST APIs? Please advise. Thank you in advance. Regards, Eshwar
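For reference, a minimal sketch of the setting that turns off SSL on the splunkd management/REST port; whether disabling it is acceptable in your environment is a separate question, and the snippet below is the generic server.conf form, not something confirmed for your specific deployment.

# $SPLUNK_HOME/etc/system/local/server.conf
[sslConfig]
enableSplunkdSSL = false
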
Is there any way to block logs coming from other servers, in a distributed deployment, when the DEBUG level is activated? I ask because our Splunk is suffering performance degradation due to the volume of DEBUG logs. I'm still studying the props.conf documentation; would this be the right way to do it?
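Dropping events at parse time with props.conf plus transforms.conf is a documented pattern; below is a minimal sketch that discards events containing " DEBUG " for one sourcetype. The sourcetype name and the regex are assumptions to adapt, and the configuration must live on the indexers or heavy forwarders that parse the data.

# props.conf
[your:sourcetype]
TRANSFORMS-drop_debug = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue
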
Since I cannot find much on querying ASUS router syslogs, and I am completely new to Splunk, I thought I'd start a thread for other Google travelers in the far future. I installed Splunk Enterprise yesterday and I am successfully sending syslogs. As my first self-challenge, I'm trying to build a query with just dropped packets for external IP sources, but it's not working.

source="udp:514" index="syslog" sourcetype="syslog"
| where !(cidrmatch("10.0.0.0/8", src) OR cidrmatch("192.168.0.0/16", src) OR cidrmatch("172.16.0.0/12", src))

The raw data is below. I want to filter out all the 192.168 private addresses and keep just the external addresses, like that darn external HP src IP (15.73.182.64).

Feb 4 08:46:36 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=50 ID=43798 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D135F84C3294ECB) MARK=0x8000000
Feb 4 08:46:37 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=50 ID=43799 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D136188C3294ECB) MARK=0x8000000
Feb 4 08:46:38 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=50 ID=43800 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D136590C3294ECB) MARK=0x8000000
Feb 4 08:46:40 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=50 ID=43801 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D136DA0C3294ECB) MARK=0x8000000
Feb 4 08:46:44 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=49 ID=43802 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D137DC0C3294ECB) MARK=0x8000000
Feb 4 08:46:52 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=49 ID=43803 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D139E00C3294ECB) MARK=0x8000000
Feb 4 08:47:09 kernel: DROP IN=eth4 OUT= MAC=04:42:1a:51:a7:70:f8:5b:3b:3b:bd:e8:08:00 src=15.73.182.64 DST=192.168.1.224 LEN=82 TOS=0x00 PREC=0x00 TTL=49 ID=43804 DF PROTO=TCP SPT=5222 DPT=24639 SEQ=120455851 ACK=2704633958 WINDOW=23 RES=0x00 ACK PSH URGP=0 OPT (0101080A1D13DE80C3294ECB) MARK=0x8000000
Feb 4 08:47:17 kernel: DROP IN=eth4 OUT= MAC=ff:ff:ff:ff:ff:ff:28:11:a8:58:a6:ab:08:00 src=192.168.1.109 DST=192.168.1.255 LEN=78 TOS=0x00 PREC=0x00 TTL=128 ID=41571 PROTO=UDP SPT=137 DPT=137 LEN=58 MARK=0x8000000

Next question: would anyone be able to write an app that takes the external IPs and does a lookup against the AbuseIPDB API or other blacklist APIs?
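A sketch of one way to make the filter work, assuming the src field is not auto-extracted and has to be pulled from the raw text first (the rex below matches both src= and SRC=); NOT is used instead of ! for the negation.

index="syslog" sourcetype="syslog" source="udp:514" "DROP"
| rex "(?i)src=(?<src>\d{1,3}(?:\.\d{1,3}){3})"
| where NOT (cidrmatch("10.0.0.0/8", src) OR cidrmatch("192.168.0.0/16", src) OR cidrmatch("172.16.0.0/12", src))
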
Dear Splunkers, may I ask for help please~ I have a dashboard like the one below and need some suggestions on how to add a button in the action field; when the button is clicked, the content of the status field should change to "Ack". Thank you all.

<dashboard version="1.1" theme="dark" script="test.js">
  <label>111</label>
  <row>
    <panel>
      <table>
        <search>
          <query>|makeresults count=5 | eval A=random(), B=random(), status="", action="Ack/UnAck"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>
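One building block for this, shown as a sketch only (actually updating the status cell in place would still need custom JavaScript in test.js or a KV store lookup that the search reads back): enable cell drilldown on the table and capture a click on the action column into a token.

<option name="drilldown">cell</option>
<drilldown>
  <condition field="action">
    <set token="ack_clicked_row">$row.A$</set>
  </condition>
  <condition></condition>
</drilldown>
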
When I go to the search head to change the configuration of TA_vectra_detect_json, I get this: "You do not have permissions to edit this configuration."
I have a requirement to fetch the success count, failure count, and average response time. In the events I have fields like httpsCode and timetaken, where timetaken returns values like 628, 484, etc. The logic is: if httpsCode is 200 it should be treated as a success, and anything else should be treated as a failure. Finally, the statistics table should show the success count, failure count, and average response time.
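A minimal sketch, assuming the fields are already extracted exactly as named in the post (httpsCode and timetaken), that any non-200 code counts as a failure, and that the index name is a placeholder.

index=your_index
| stats count(eval(httpsCode=200)) as success
        count(eval(httpsCode!=200)) as failure
        avg(timetaken) as avg_response_time
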
Hello, I am currently using Splunk version 9.1.0.2, which is affected by several newly reported CVEs, so I need to upgrade to the latest version. The following link mentions that "Splunk recommends that customers use version 9.2.0.1 instead of version 9.2.0." (Release Notes). However, on the download page (Splunk Enterprise Download Page), the latest version available is 9.2.0. Could you please tell us when Splunk Enterprise 9.2.0.1 will be released?
Hi, I have the following log data in Splunk. Below is example data taken from Splunk:

2024-02-04T00:15:15.209Z [jfrt ] [INFO ] [64920151065ecdd9] [.s.b.i.GarbageCollectorInfo:81] [cdd9|art-exec-153205] - Storage TRASH_AND_BINARIES garbage collector report: Total execution time:    15.25 minutes Candidates for deletion: 4,960 Checksums deleted:       4,582 Binaries deleted:        4,582

host = hostname.com
index = XXXXXX1
source = artifactory-service
sourcetype = artifactory-service

How can I display a trend/timechart of "Total execution time" using a Splunk query, grouped by timestamp and host name, for the Storage TRASH_AND_BINARIES garbage collector report? I appreciate any help. Thanks, Rahul
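A sketch of one way to chart this, assuming the whole report is indexed as a single event and that extracting the minutes value with rex is acceptable; the timechart span is arbitrary.

index=XXXXXX1 sourcetype=artifactory-service "Storage TRASH_AND_BINARIES garbage collector report"
| rex "Total execution time:\s+(?<total_execution_minutes>[\d\.]+)\s+minutes"
| timechart span=1d avg(total_execution_minutes) by host
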
Hello all! Is there a default retention time for events (containers/cases) stored on the SOAR server? And if so, can I change that time? @phanTom Thank you in advance
Hi, I would like to install additional tools on my Splunk Docker container, but yum is not installed. rpm is available but needs to be configured along with a repo, I guess? What is the best way to do this? Do I need a Red Hat subscription for this?
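A sketch of one common approach, assuming you would rather rebuild the image than modify a running container. Which package manager is present (microdnf, dnf, or yum) depends on the base image of your splunk/splunk tag, so treat the command below as an assumption to verify, and the tool list is just an example.

# Dockerfile (hypothetical)
FROM splunk/splunk:latest
USER root
# Use whichever package manager the base image actually ships.
RUN microdnf install -y vim-minimal iproute && microdnf clean all
USER splunk
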