All Posts



Hi, I have one problem. Splunk ver. 9.1.1, pymqi version 1.12.10, MQ client ver. 9.2. When I download messages from IBM MQ I receive the following error:

ERROR ExecProcessor [678315 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mq/bin/mqinput.py" Exception occurred while handling response output: 'ascii' codec can't decode byte 0xf1 in position 0: ordinal not in range(128) mqinput_stanza:mqinput://TEST_MQ
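This error usually means the payload is being decoded as ASCII while the queue manager is sending EBCDIC-encoded data: 0xf1 is not a valid ASCII byte, but in common EBCDIC code pages it is the digit '1'. A minimal Python sketch of the mismatch, assuming a cp500-encoded payload (the actual code page depends on the queue manager's CCSID):

```python
# Sample EBCDIC payload bytes: in code page 500, 0xF1 0xF2 0xF3 are '1' '2' '3'.
msg = b"\xf1\xf2\xf3"

# Decoding as ASCII reproduces the TA's error:
try:
    msg.decode("ascii")
except UnicodeDecodeError as e:
    print("ascii failed:", e)

# Decoding with an EBCDIC codec succeeds:
print(msg.decode("cp500"))  # -> 123
```

If this matches your situation, the fix is to make the input decode with the correct code page (or convert the message on the MQ side) rather than ASCII.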
The question is what logs you are ingesting from that host. If you have the TA-nix add-on installed, you can use its uptime.sh input to get the system's uptime.
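As a hedged sketch of an alert on top of that input (the index, sourcetype, and field name here are assumptions — check what the Unix TA actually produces in your environment), you could fire when the most recently reported uptime is under ten minutes:

```
index=os sourcetype="Unix:Uptime"
| head 1
| where SystemUpTime < 600
```

Scheduled every few minutes, this returns a result (and so triggers the alert) only shortly after a reboot.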
You should extend the search to cover a period of time and count how many times the test succeeds and how many times it fails. From this, you can work out a percentage success rate. Use this in your alert.
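For example (the index, sourcetype, and status field are placeholders for whatever your test results actually contain), a search over the last 24 hours could be:

```
index=synthetics sourcetype=browser_test earliest=-24h
| stats count(eval(status=="success")) AS ok count AS total
| eval success_pct = round(100 * ok / total, 2)
| where success_pct < 98
```

The alert then fires only when the rolling success rate drops below 98%, instead of on every individual failed test.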
Hi Splunkers, in our environment we are collecting Microsoft Windows logs that currently come in XML format. The customer is asking us to switch off XML: in the Splunk console, they want to see logs in the legacy/traditional format, not the XML one. I don't remember how this can be achieved; am I wrong, or do I have to change a parameter in the add-on configuration? By the way, here are the essential details of our scenario:

Collection mode: UF installed on each Windows data source. Logs are sent to an HF and then to Splunk SaaS, so the final flow is: log sources (with UF) -> HF -> Splunk SaaS (both Core and ES).
UF management: all UFs are managed with a Deployment Server.
Add-on used: the "classic" one, https://splunkbase.splunk.com/app/742
Add-on installation: on both the SaaS environment and the HF.
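If the inputs come from the Splunk Add-on for Microsoft Windows, this is controlled per event log input with the renderXml setting in inputs.conf on the UFs. A sketch (the Security stanza is just an example — apply it to each of your event log inputs, and note the sourcetype changes from XmlWinEventLog back to WinEventLog, which may affect existing searches):

```
# inputs.conf, deployable to the UFs via the Deployment Server
[WinEventLog://Security]
renderXml = false
```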
Hi, I have a Nanolog Streaming Service (NSS) feed from Zscaler that I have connected to Splunk. The problem is that it doesn't show any data on the dashboard, but if I look in 'search', there is data. This is the feed output format I have configured on the Zscaler Admin Portal for my Splunk feed:

"%s{time}","%s{login}","%s{proto}","%s{eurl}","%s{action}","%s{appname}","%s{appclass}","%d{reqsize}","%d{respsize}","%s{urlclass}","%s{urlsupercat}","%s{urlcat}","%s{malwarecat}","%s{threatname}","%d{riskscore}","%s{dlpeng}","%s{dlpdict}","%s{location}","%s{dept}","%s{cip}","%s{sip}","%s{reqmethod}","%s{respcode}","%s{ua}","%s{ereferer}","%s{ruletype}","%s{rulelabel}","%s{contenttype}","%s{unscannabletype}","%s{deviceowner}","%s{devicehostname}","%s{keyprotectiontype}"

Any suggestions on how I can further troubleshoot?
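When data is searchable but dashboards are empty, the usual cause is that the events landed in a different index or sourcetype than the dashboard searches expect. A first troubleshooting step is to check where the data actually is and compare that with the dashboard's base searches:

```
| tstats count where index=* by index, sourcetype
| sort - count
```

Then open one of the empty dashboard panels, inspect its search, and verify the index/sourcetype it references matches what you see above.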
Hi, currently I have a browser test set up, and if the uptime falls below 98% I would like an email sent out to certain people. However, the alerting on uptime currently works on a per-test basis, in that the uptime is either 100 or 0 depending on whether a test fails. How can I set it up so that uptime is measured over a period of time and not per test? The image below might show better what I mean with what I currently have.
I have installed the Akamai add-on for Splunk on our HF: https://splunkbase.splunk.com/app/4310 I followed the documentation, but after installing the add-on I don't see any option to add an API input; it shows only a dashboard. There is no option called "Akamai Security Incident Event Manager API" under data inputs, so I am not able to ingest data. Can anyone help here, please?
I have search result outputs like the following:

technique | tactic | searchName
Data from Information Repositories | collection | search Name A
Valid Accounts | persistence | search Name B
Use Alternate Authentication Material: Pass the Ticket | lateral movement | search Name C

and so on... I need to add a dashboard panel as shown below. I need help with the search query for my dashboard panel, where the count of the number of custom searches created is displayed for every technique.
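A sketch of the panel search, assuming the rows above come from a base search (or lookup) that already yields the fields technique and searchName:

```
... your base search ...
| stats dc(searchName) AS search_count by technique
| sort - search_count
```

dc() counts distinct search names per technique; swap it for count if the same search can legitimately appear more than once and you want every occurrence counted.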
Hello Ismo, the inputs.conf definition looks like this:

[monitor:///home/sicpa_operator/deploy/PROD/machine/monitoring/*production_statistics.csv]
index = sts
disabled = false
sourcetype = STSLOGMPPS
crcSalt = <SOURCE>

With *production_statistics.csv I match all the files that have to be synced; they only differ in the date at the beginning of each file name. It seems I'm only able to sync the files from the deployment date onwards: files from the date when the UF was deployed are synced, but everything before that is not. BR
Hi, I have a dashboard that uses HTML links for logging on to devices via VNC, SSH, SCP etc. After a short maintenance on the server, it no longer recognises all of them as links. ssh2 and vnc2 are fine, but the rest are no-go... Neither Chrome nor Edge nor Firefox sees them as links.

<panel> <html> <p align="center"> <a href="vnc:$exampleIp$" target="_blank">VNC</a> </p> <p align="center"> <a href="scp://admin:password@$exampleIp$" target="_blank">SCP</a> </p> <p align="center"> <a href="http://$exampleIp$" target="_blank">WEB</a> </p> <p align="center"> <a href="ssh2://admin:password@$exampleIp$/">SSH</a> </p> <p align="center"> <a href="vnc2:$exampleIp$:5901" target="_blank">VNC2</a> </p> </html> </panel>

Inspected element:
I am collecting logs from an Ubuntu server (16.04) using Splunk and would like to create an alert for when the system restarts. Does anyone know which logs or events I can use to trigger an alert when the Ubuntu server restarts?
Hi, as you have a Mac with Apple silicon and are trying to install Splunk into Linux running on the Mx chip, it won't work until Splunk (hopefully) delivers an ARM Splunk version for us. You can run Splunk on Apple silicon only in macOS with Rosetta 2. I have heard some rumours that you can somehow use Docker to run Linux x86_64 binaries too, but I haven't seen or used it myself. r. Ismo
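The Docker route Ismo mentions relies on x86_64 emulation, so treat it as unsupported for anything beyond experimentation and expect a performance hit. A sketch using the official splunk/splunk image (the tag, port, and password are examples):

```
docker run --platform linux/amd64 -d -p 8000:8000 \
  -e SPLUNK_START_ARGS="--accept-license" \
  -e SPLUNK_PASSWORD="ChangeMe123!" \
  --name splunk splunk/splunk:latest
```

The --platform linux/amd64 flag forces the x86_64 image on an ARM host; Splunk Web should then be reachable on http://localhost:8000.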
This is one course you should take if you are responsible for defining monitoring etc. for Splunk: https://www.splunk.com/en_us/pdfs/training/splunk-enterprise-data-administration-course-description.pdf
Hi, are you sure that you haven't set this: ignoreOlderThan? Can you post your inputs.conf for this source, so we can check if there is something else that could cause this behaviour? r. Ismo
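For reference, this is the kind of setting to look for (the path is taken from your stanza; the 7d value is just an example). If ignoreOlderThan is set, files whose modification time is older than that window are skipped permanently, which would match files predating the UF deployment never being picked up:

```
[monitor:///home/sicpa_operator/deploy/PROD/machine/monitoring/*production_statistics.csv]
ignoreOlderThan = 7d
```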
If/when you have issues with lookups (e.g. from time to time you find old lookups on the SHC), you should check this: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/PropagateSHCconfigurationchanges#Preserve_lookup_files_across_app_upgrades r. Ismo
Hi, first, it's best to use a real syslog server instead of a Splunk UF/HF, even though you can use Splunk for that too. For a PoC you can use Splunk, but in production you should switch to something else. Ports under 1024 cannot be used unless you are running the process as root, and you shouldn't run splunkd as root. For that reason you must switch the port to e.g. 1514 or something similar, and also configure SolarWinds SEM to use it. r. Ismo
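A sketch of the receiving side on the HF (the sourcetype and index names are examples). Note two other problems with the original stanza: sourcetype = * is not a valid sourcetype, and index names must be lowercase, so SolarWinds-index will need to be renamed and the index created:

```
# inputs.conf on the HF
[tcp://1514]
connection_host = ip
sourcetype = solarwinds:sem
index = solarwinds
```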
I attempted to retrieve data from a REST API in a proxy environment using Splunk Add-on Builder, but was unsuccessful. The proxy settings have been configured at the OS level. As a troubleshooting step, I found that while I can execute curl commands from the OS, I am unable to do so from Splunk. Additionally, I am unable to access Splunkbase via Splunk Web. Is there a best practice for working with Splunk in a proxy environment?
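splunkd does not necessarily pick up OS-level proxy variables by itself. One documented approach is the [proxyConfig] stanza in server.conf on the instance running Add-on Builder (host and port are placeholders), followed by a restart:

```
# $SPLUNK_HOME/etc/system/local/server.conf
[proxyConfig]
http_proxy = http://proxy.example.com:8080
https_proxy = http://proxy.example.com:8080
no_proxy = localhost, 127.0.0.1
```

Individual add-ons may also have their own proxy settings in their setup pages, which apply only to that add-on's requests.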
I'm trying to set up a Proof of Concept (PoC) environment for a Splunk Heavy Forwarder (HF) receiving data from SolarWinds SEM. We are using TCP port 514 to forward logs from SolarWinds SEM. Both Splunk HF and SolarWinds are using free licenses.

SolarWinds has been configured for forwarding via the admin console. In the Splunk HF inputs.conf file, details have been added as below:

[TCP://514]
connection_host = X.X.X.93
sourcetype = *
disabled = false
index = SolarWinds-index

Both instances are running in the AWS cloud, on the same subnet. When I check the Splunk HF interface with tcpdump, I receive the following output:

Splunk host name - ip-X-X-X-72.ap-southeast-1.compute.internal
SolarWinds host name - ip-X-X-X-93.ap-southeast-1.compute.internal

00:58:05.726708 IP ip-X-X-X-72.ap-southeast-1.compute.internal.shell > ip-X-X-X-93.ap-southeast-1.compute.internal.36044: Flags [R.], seq 0, ack 3531075234, win 0, length 0
00:58:05.727636 IP ip-X-X-X-93.ap-southeast-1.compute.internal.36054 > ip-X-X-X-72.ap-southeast-1.compute.internal.shell: Flags [S], seq 3042331467, win 64240, options [ 1460,sackOK,TS  1136916397  0,nop,wscale 7], length 0

The Splunk HF is receiving logs from the Universal Forwarder (UF) on the Windows server, but not from SolarWinds SEM. Can anyone advise on this issue?
First, do NOT enable all ES correlation searches. That will cause more problems than it will solve. Enable only the correlation searches that pertain to your use cases and for which you have data ingested in Splunk.

Where a TA should be installed depends on what the TA does. The installation instructions for the TA should specify the location. If they don't, use the "Where to install" link I provided earlier. Generally speaking, it can't hurt to install a TA on both indexers and UFs.

Splunkbase is the source for most Splunk TAs. Others can be downloaded from the vendors that created them for their products. Still others are available from GitHub. It can be difficult to locate a TA without knowing its name, however. What do you want the TA to do? Perhaps we can help you find something appropriate.
The data will be retained in Splunk for as long as it's been configured to stay, so although your dashboard may be searching the last 30 days, the data may be there for longer.

Generally the approach to your problem is summary indexing. What people often do is ingest data from their sources, run aggregations on those sources, and save the aggregations to a summary index. The main index with all the raw data is then retained for a short period, whereas the much smaller summary index is configured to be retained for a longer period so it can be used for long-term analysis. Look at reports with summary indexing, which can do this automatically; the collect SPL command also allows you to do it manually.

When people ask whether something is possible, the answer is almost always yes, and often there is more than one way. As for dashboarding, that's the easy part: if you have prepared your data, you can do what you like with it, as long as you have it.
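As a sketch of the manual approach (index names, fields, and the time range are placeholders for your actual data), a scheduled search that rolls a day of raw events into hourly aggregates and writes them to a summary index could look like:

```
index=web sourcetype=access_combined earliest=-1d@d latest=@d
| timechart span=1h count AS requests avg(response_time) AS avg_response_time
| collect index=summary_web
```

Long-term dashboards then search index=summary_web, which stays small and can be retained far longer than the raw index.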