All Topics

I need to create a user in Splunk ITSI with the following access: read-only access to all glass tables and dashboards, no export functionality enabled, and no further drill-down functionality available. When I add certain capabilities to the role, it restricts me to accessing ITSI only. Can someone please suggest the best way to work this out?
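A possible starting point is a custom role in authorize.conf; the capability names below are assumptions to verify against your ITSI version's authorize.conf spec, not a confirmed recipe:

```
[role_itsi_readonly]
# Capabilities are additive, so import a minimal base role.
importRoles = user
# Read-only glass table access (verify the exact capability name
# shipped with your ITSI version).
read_itsi_glass_table = enabled
# Do not grant write_itsi_glass_table or delete capabilities, and avoid
# importing base roles that grant export_results_is_visible, which
# controls the Export button in the UI.
```

Drill-down restrictions are usually handled in the glass table/dashboard definitions themselves rather than via a capability, so that part may need to be disabled per object.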
I have some custom metrics and all of them work OK in my SaaS environment; when I test them, they work fine. When I create the dashboard, I can create the graphs, but with the same queries I always get "No data available". The other queries are exactly the same and their graphs have no problem; the only difference is in the final part, where we change 0 to 3. The outcome is just an integer number, and AppDynamics shows it as a successful test. What could be the problem?
Hello,

index=* "My-Search-String" | rex "My-Regex" | eval Status=if(like(my-rex-extractor-field,"xxx-yyyy%"), "FILE_DELIVERED", "FILE_NOT_DELIVERED") | table Status

I need to run the above as an email alert between 5 and 7 AM. Although the file arrives around 05:15 AM, I want to keep running the alert until 7 AM so that it continues to report the status; missing it would be detrimental if the status remains FILE_NOT_DELIVERED. The problem is that the alert keeps outputting FILE_NOT_DELIVERED even though the output also contains FILE_DELIVERED.

Current behaviour when the alert triggers at 05:45 AM (cron schedule, every 15 minutes):
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED
FILE_DELIVERED
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED

Expected behaviour: as soon as the SPL finds FILE_DELIVERED, the FILE_NOT_DELIVERED results should be suppressed for all subsequent runs, and the SPL should continue to return FILE_DELIVERED. How do I achieve this, please?
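One way to get that behaviour is to let each scheduled run search the whole delivery window since 05:00 rather than only the last 15 minutes, and collapse all events to a single status. A sketch, assuming the alert's time range is set to earliest=@d+5h latest=now:

```
index=* "My-Search-String"
| rex "My-Regex"
| eval delivered=if(like(my-rex-extractor-field, "xxx-yyyy%"), 1, 0)
| stats max(delivered) as delivered
| eval Status=if(delivered=1, "FILE_DELIVERED", "FILE_NOT_DELIVERED")
| table Status
```

Once the file arrives, every later run in the window still sees the delivery event, so the status stays FILE_DELIVERED for the rest of the window.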
Can Splunk ingest log data from HCL Domino and Notes?
The search they are running is index=* cloudtrail<bucketnumber>* across a 7-day period.

Environment details: we are using the Splunk Add-on for AWS on an on-prem search head cluster. On review of the job inspector log, it looks like one user's search is reaching out to source=s3://<aws smart store info> while the other user's search is only searching the local indexes, resulting in a drastic difference of 76 vs. 8500 events returned for the same time period.

Steps I've taken: I checked the app they are searching in and the roles for each user (they are identical). I checked each user's folder in Splunk; their settings are the same, even down to the time zone. I even tried adding the index name to the search and having the user with missing logs re-run it. Still no change in her results, and the job logs show her search is not reaching out to S3.

Is there something I am missing? Is this an AWS app setting that I need to adjust? I would appreciate any thoughts you may have on this. Thanks!
Hi colleagues, can somebody help me? I have this query for the current day (figure 1):

index=xxxx sourcetype=xxx earliest=-d@d latest=now | table control indicador cumplimiento | sort control | chart values(cumplimiento) over control by indicador

The fields are: Empleo, Huérfanas, Uso, Control, _time and Month. My problem is that I also need to show the data as in figure 2 (by month), but I can't find a way to display it like that figure (with the month on top). Has somebody had a similar problem?
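If the goal is one column per month instead of per indicador, splitting the chart by a month field is one sketch (the time range and month format are assumptions; adjust to match your Month field):

```
index=xxxx sourcetype=xxx earliest=-3mon@mon latest=now
| eval Month=strftime(_time, "%Y-%m")
| chart values(cumplimiento) over control by Month
```

The "by Month" clause is what puts the months across the top of the chart.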
I am using the OpenTelemetry Collector to receive and export logs to my Splunk Cloud instance. I have an AWS Lambda which polls data and runs an OpenTelemetry Lambda layer that receives the logs in OTLP format and exports them to the Splunk Cloud instance using the HEC exporter. Below is the otel configuration:

receivers:
  otlp:
    protocols:
      http:
exporters:
  splunk_hec:
    token: ${SPLUNK_TOKEN}
    endpoint: ${HEC_ENDPOINT}
    # Source. See https://docs.splunk.com/Splexicon:Source
    source: "otel"
    # Source type. See https://docs.splunk.com/Splexicon:Sourcetype
    sourcetype: "otel"
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]

The problem is that the splunk_hec exporter fails to send the logs to my Splunk Cloud instance. I get the errors below:

max elapsed time expired Post "https://inputs.prd-p-gxyqz.splunkcloud.com:8088/services/collector/event": EOF
max elapsed time expired Post "https://inputs.prd-p-gxyqz.splunkcloud.com:8088/services/collector/event": context deadline exceeded

Can you please help me identify the issue? Also, what exactly should my HEC endpoint URL be? The documentation says the format should be <protocol>://http-inputs-<host>.splunkcloud.com:<port>/<endpoint>, but that format doesn't work.
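On the endpoint format: for managed Splunk Cloud stacks, HEC is usually reached via the "http-inputs-" hostname on port 443 rather than "inputs." on 8088, and the splunk_hec exporter expects the full collector path in the endpoint. A sketch (the stack name is taken from the error message above; verify yours, and note this is an assumption, not a confirmed fix):

```yaml
exporters:
  splunk_hec:
    token: ${SPLUNK_TOKEN}
    # "http-inputs-<stack>" on 443 is the usual managed Splunk Cloud form.
    endpoint: "https://http-inputs-prd-p-gxyqz.splunkcloud.com:443/services/collector/event"
    source: "otel"
    sourcetype: "otel"
```

If the endpoint still times out, also check that HEC is enabled on the stack and that the Lambda's network path (VPC/NAT) allows outbound 443 to splunkcloud.com.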
I've set up Splunk on one of my EC2 instances and created an AMI from it. However, when I launch new EC2 instances using this AMI, Splunk stops working on the original EC2 instance, and it is not working on the new machines either. What could be causing this issue?
Hi, I have a problem. Splunk version 9.1.1, pymqi version 1.12.10, MQ client version 9.2. When I download messages from IBM MQ, I receive the following error:

ERROR ExecProcessor [678315 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mq/bin/mqinput.py" Exception occurred while handling response output: 'ascii' codec can't decode byte 0xf1 in position 0: ordinal not in range(128) mqinput_stanza:mqinput://TEST_MQ
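The 0xf1 byte suggests the message payload is not ASCII; in EBCDIC code pages (e.g. CCSID 500), 0xF1 is the digit "1", which is consistent with a queue manager sending EBCDIC data. A minimal sketch of decoding with fallbacks instead of plain ASCII (the codec order is an assumption about your MQ CCSID, not part of the TA-mq add-on):

```python
def decode_mq_message(raw: bytes) -> str:
    """Try likely codecs for an IBM MQ payload instead of plain ASCII."""
    # cp500 is a common EBCDIC code page; 0xF1 decodes to "1" there.
    for codec in ("utf-8", "cp500", "latin-1"):
        try:
            return raw.decode(codec)
        except UnicodeDecodeError:
            continue
    # latin-1 maps every byte, so this fallback is only a safety net.
    return raw.decode("latin-1", errors="replace")

print(decode_mq_message(b"\xf1\xf2\xf3"))  # "123" for EBCDIC input
```

In practice, the cleanest fix is often to request conversion on the MQGET itself (MQGMO_CONVERT with a target CCSID) so the add-on receives text it can decode.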
Hi Splunkers, in our environment we are collecting Microsoft Windows logs that currently come in XML format. The customer is asking us to switch off XML: in the Splunk console they want to see logs in the legacy/traditional format, not XML. I don't remember how this is achieved; am I wrong, or do I have to change a parameter in the add-on configuration? For reference, these are the essential details of our scenario:

Collection mode: UF installed on each Windows data source. Logs are sent to an HF and then to Splunk SaaS, so the final flow is: log sources (with UF) -> HF -> Splunk SaaS (both Core and ES).
UF management: all UFs are managed with a Deployment Server.
Add-on used: the "classic" one, https://splunkbase.splunk.com/app/742
Add-on installation: on both the SaaS environment and the HF.
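With the Splunk Add-on for Microsoft Windows, the XML vs. classic choice is typically the renderXml setting in the inputs.conf stanzas deployed to the UFs via the Deployment Server. A sketch (stanza names depend on which channels you collect):

```
[WinEventLog://Security]
renderXml = false
```

Note that the expected sourcetype changes with this setting (XmlWinEventLog vs. the classic WinEventLog form), so dashboards and CIM field extractions should be re-checked after the switch.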
Hi, I have a Nanolog Streaming Service (NSS) from Zscaler that I have connected to Splunk. The problem is that it doesn't show any data on the dashboard, but if I look in search, there is data. This is the feed output format I have configured on the Zscaler Admin Portal for my Splunk feed:

"%s{time}","%s{login}","%s{proto}","%s{eurl}","%s{action}","%s{appname}","%s{appclass}","%d{reqsize}","%d{respsize}","%s{urlclass}","%s{urlsupercat}","%s{urlcat}","%s{malwarecat}","%s{threatname}","%d{riskscore}","%s{dlpeng}","%s{dlpdict}","%s{location}","%s{dept}","%s{cip}","%s{sip}","%s{reqmethod}","%s{respcode}","%s{ua}","%s{ereferer}","%s{ruletype}","%s{rulelabel}","%s{contenttype}","%s{unscannabletype}","%s{deviceowner}","%s{devicehostname}","%s{keyprotectiontype}"

Any suggestions on how I can further troubleshoot?
Hi, I currently have a browser test set up, and I would like an email sent to certain people if the uptime falls below 98%. However, uptime alerting currently works on a per-test basis, in that uptime is either 100 or 0 depending on whether a test fails. How can I set it up so that the alert evaluates uptime over a period of time rather than per test? The image below may show better what I mean and what I currently have.
I have installed the Akamai add-on for Splunk on our HF: https://splunkbase.splunk.com/app/4310

I followed the documentation, but after installing the add-on I do not see any option to add an API input; it shows only a dashboard. There is no option called "Akamai Security Incident Event Manager API" under Data Inputs, so I am not able to ingest data. Can anyone help here, please?
I have search result outputs like the following:

tactic | technique | searchName
Data from Information Repositories | collection | search Name A
Valid Accounts | persistence | search Name B
Use Alternate Authentication Material: Pass the Ticket | lateral movement | search Name C

and so on. I need to add a dashboard panel where the count of custom searches created is displayed for every technique. I need help with the search query for this panel.
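A sketch for the panel query, appended to whatever base search produces the table above (field names taken from that table):

```
... | stats dc(searchName) as search_count by technique
| sort - search_count
```

dc() counts distinct search names per technique; use count instead if the same search may legitimately appear more than once and should be counted each time.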
Hi, I have a dashboard that uses HTML links for logging on to devices via VNC, SSH, SCP, etc. After a short maintenance on the server, it no longer recognises all of them as links. ssh2 and vnc2 are fine, but the rest do not work: neither Chrome nor Edge nor Firefox sees them as links.

<panel>
  <html>
    <p align="center">
      <a href="vnc:$exampleIp$" target="_blank">VNC</a>
    </p>
    <p align="center">
      <a href="scp://admin:password@$exampleIp$" target="_blank">SCP</a>
    </p>
    <p align="center">
      <a href="http://$exampleIp$" target="_blank">WEB</a>
    </p>
    <p align="center">
      <a href="ssh2://admin:password@$exampleIp$/">SSH</a>
    </p>
    <p align="center">
      <a href="vnc2:$exampleIp$:5901" target="_blank">VNC2</a>
    </p>
  </html>
</panel>

Inspected element:
I am collecting logs from an Ubuntu server (16.04) using Splunk and would like to create an alert for when the system restarts. Does anyone know which logs or events I can use to trigger an alert when the Ubuntu server restarts?
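Assuming /var/log/syslog is already being monitored by the forwarder (the index name below is an assumption), one sketch is to alert on the messages that only appear at boot, such as the kernel banner or systemd's startup summary:

```
index=os host=my-ubuntu-server source=/var/log/syslog
    ("kernel: Linux version" OR "systemd[1]: Startup finished")
```

Scheduled over a short window (e.g. every 5 minutes over the last 5 minutes) with a "number of results > 0" trigger, this fires once per reboot.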
I attempted to call a REST API in a proxy environment using Splunk Add-on Builder, but was unsuccessful. The proxy settings have been configured at the OS level. As a troubleshooting step, I found that while I can execute curl commands from the OS, I am unable to do so from Splunk. Additionally, I am unable to access Splunkbase via Splunk Web. Is there a best practice for working with Splunk in a proxy environment?
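Splunk does not automatically inherit OS-level proxy settings, so there are two places to sketch them in (hostnames and ports below are placeholders): environment variables in splunk-launch.conf, which scripted/modular inputs such as Add-on Builder inputs pick up, and server.conf's [proxyConfig] stanza for splunkd itself, which covers things like Splunkbase access:

```
# $SPLUNK_HOME/etc/splunk-launch.conf
HTTP_PROXY=http://proxy.example.com:8080
HTTPS_PROXY=http://proxy.example.com:8080
NO_PROXY=localhost,127.0.0.1

# $SPLUNK_HOME/etc/system/local/server.conf
[proxyConfig]
http_proxy = http://proxy.example.com:8080
https_proxy = http://proxy.example.com:8080
no_proxy = localhost,127.0.0.1
```

A restart of Splunk is needed after either change.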
I'm trying to set up a proof-of-concept (POC) environment with a Splunk Heavy Forwarder (HF) receiving data from SolarWinds SEM. We are using TCP port 514 to forward logs from SolarWinds SEM. Both the Splunk HF and SolarWinds are using free licenses.

The forwarding configuration has been performed in the SolarWinds admin console. In the Splunk HF inputs.conf file, the following details have been added:

[TCP://514]
connection_host = X.X.X.93
sourcetype = *
disabled = false
index = SolarWinds-index

Both instances are running in the AWS cloud, in the same subnet. When I check the Splunk HF interface with tcpdump, I receive the following output (Splunk host name: ip-X-X-X-72.ap-southeast-1.compute.internal; SolarWinds host name: ip-X-X-X-93.ap-southeast-1.compute.internal):

00:58:05.726708 IP ip-X-X-X-72.ap-southeast-1.compute.internal.shell > ip-X-X-X-93.ap-southeast-1.compute.internal.36044: Flags [R.], seq 0, ack 3531075234, win 0, length 0
00:58:05.727636 IP ip-X-X-X-93.ap-southeast-1.compute.internal.36054 > ip-X-X-X-72.ap-southeast-1.compute.internal.shell: Flags [S], seq 3042331467, win 64240, options [ 1460,sackOK,TS 1136916397 0,nop,wscale 7], length 0

The Splunk HF is receiving logs from the Universal Forwarder (UF) on the Windows server, but not from SolarWinds SEM. Can anyone advise on this issue?
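Two details stand out in the post above: tcpdump labels 514/tcp as "shell", and the [R.] (RST) from the HF side suggests nothing is actually listening on that port yet. Splunk expects the stanza name in lowercase, and index names must be lowercase, so the [TCP://514] stanza as written is likely being ignored. A corrected sketch (the sourcetype name is an assumption; the index must also exist on the receiving side):

```
[tcp://514]
connection_host = ip
sourcetype = solarwinds:sem
index = solarwinds-index
disabled = false
```

After fixing the stanza and restarting the HF, netstat/ss on the HF should show splunkd listening on 514 before SolarWinds retries.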
Hello Splunk community, I have a dashboard where I can search data going back a maximum of 30 days. I'm looking for a way to achieve long-term trending. What would be the best approach for comparing data on a month-by-month basis, for example? After 30 days I want to save that data, recall it at a later date, and do a comparison. Is this even possible? Thanks in advance.
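One common approach is summary indexing: a scheduled search writes small daily aggregates into a summary index, which is retained independently of the raw data and can be queried months later. A sketch of a nightly search (index, sourcetype, and field names are assumptions):

```
index=main sourcetype=my_data earliest=-1d@d latest=@d
| timechart span=1d count as daily_count
| collect index=my_summary
```

A dashboard panel can then trend across months with something like: index=my_summary | timechart span=1mon sum(daily_count) as monthly_count. The summary index must be created beforehand, and its retention set to however far back you want to compare.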
I need an inputs.conf stanza to monitor the file at the location below:

c:\test.log
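A minimal monitor stanza sketch for that file (the index and sourcetype values are assumptions; adjust them to your environment):

```
[monitor://c:\test.log]
disabled = false
index = main
sourcetype = test_log
```

Place it in $SPLUNK_HOME\etc\system\local\inputs.conf (or an app's local directory) and restart the forwarder.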