All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello everyone, I'm still very new to the world of Splunk Enterprise and hope you can help me with my problem. I created the following search to be notified of app updates by email:

| rest /services/apps/local | search update.version != "" | rename title AS Update_APP, version AS Update_Version, update.version AS Update_Versionupdate | table Update_APP Update_Version Update_Versionupdate

The alert is scheduled to run every day at 12:00 p.m., and I chose "once" as the trigger. However, I get the same notification email every day, even though I've already received it. What do I have to do so that the message is only sent once? Please excuse my bad English. Best regards, Björn
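One common way to keep an alert like this from re-sending known updates is to remember what has already been reported and filter it out. Below is a minimal sketch under the assumption that a CSV lookup file named notified_updates.csv (a hypothetical name, created beforehand with the same three columns and shared with the app) serves as that memory; the alert's built-in throttling can also suppress repeats, but only for a fixed window, while a lookup persists indefinitely:

  | rest /services/apps/local
  | search update.version != ""
  | rename title AS Update_APP, version AS Update_Version, update.version AS Update_Versionupdate
  | table Update_APP Update_Version Update_Versionupdate
  | lookup notified_updates.csv Update_APP, Update_Versionupdate OUTPUT Update_APP AS seen
  | where isnull(seen)
  | fields Update_APP Update_Version Update_Versionupdate
  | outputlookup append=true notified_updates.csv

With this shape the alert only returns rows for app/version pairs it has not written to the lookup before, so the email fires once per new update.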
Hi All, we want to ingest ZAP (zero-hour auto purge) logs into Splunk, and we are using the Splunk Add-on for Microsoft Office 365. Is there a way to ingest ZAP logs using that add-on? If not, is there another Splunk app that would help us ingest them? Please help me with the above information.
Hello, I'm encountering the following issue on one of my indexers (from a total of 3) after downgrading from 8.3.3 to 8.1.6. All my other components (3 SHs, CM, MC, deployer, and indexers 2 and 3) are working fine after the downgrade. I have tried pretty much everything: killing the process, restarting Splunk, restarting the instance in the cloud; nothing seems to help. splunkd is not running.
Hi, in all snapshots the business transaction is showing as (Not Found(id:350432)) and all calls are shown as "stall". I have been able to disable the stall detection and enable it again after 10 minutes. Any idea how I can fix this?
My requirement is to get the rate of change of a certain parameter when its corresponding alert is triggered. To add more detail: we have a log file that records the database backlog. Once the backlog crosses a certain threshold we trigger an alert; however, it can be a false positive, since the system may be undergoing maintenance, which also makes the backlog grow. So I want the alert to trigger another query that captures the rate of growth over the last x hours, which would give more context about what is happening in the system. How can I achieve this in Splunk? Please share your ideas.
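One way to get that context is a follow-up search over the alert window that compares consecutive backlog readings. A minimal sketch, assuming a hypothetical index/sourcetype, a numeric backlog field extracted from the log, and x = 4 hours; it could run as a secondary saved search kicked off by the alert, or be linked from the alert email as a drilldown:

  index=db_logs sourcetype=backlog_log earliest=-4h
  | timechart span=15m latest(backlog) AS backlog
  | delta backlog AS change_per_bucket
  | eval growth_per_hour = change_per_bucket * 4

The last row's growth_per_hour shows how fast the backlog is currently growing, and the series as a whole shows whether growth is steady (as during maintenance) or accelerating.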
Turn the dashboard red when an outlier is detected on the outlier chart. If you know how to do this, please let me know.
Hello everyone, I'm pretty new to Splunk and mostly learning as I go, so please bear with me if this is a common question or an easy answer; I'm still figuring out a lot of things. I'm building a search string that splits one field of information into 5 unique field names, counts them, and maps the data to build a trending chart. Our data is pulled in on a daily basis. My search query works so far (although it's probably not optimized), and I'm now moving on to the formatting stage.

What I want is to ensure my chart can work off of our main dashboard's time picker, so that we can see the trend of our data by day, month, year, etc. My query works, but the chart loads the data in daily buckets no matter what filter is set. This is fine for a daily or weekly filter, but when I view larger ranges such as monthly or yearly, it comes out a bit messy. Is it possible to tweak the search string so that, with a monthly filter, it takes the values from each month and puts the highest amount on the chart instead of every day of the month? If not, I think the other solution may just be to make a separate chart for a monthly view. That's fine too, but I thought I would ask! Thank you in advance; the screenshot below shows what I see when changing to a "monthly" view, along with a snippet of the search string.

| stats count(eval(severity=="Low")) AS Low by _time | chart values(Low) over _time
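One hedged way to chart the peak daily value per month is to compute the daily counts first and then roll them up with a coarser timechart span. A minimal sketch built on the snippet above (the severity field is taken from the post; the base search is whatever precedes the stats):

  ... | bin _time span=1d
  | stats count(eval(severity=="Low")) AS Low by _time
  | timechart span=1mon max(Low) AS peak_daily_low

For a picker-driven view you could swap the hard-coded span=1mon for a token set by the dashboard's time selection.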
We have onboarded firewall logs from Forcepoint, and they are not parsing properly in Splunk. We tried to find an add-on to ingest the logs but found none. Is there anything we can do to solve this issue? Here is an example of our current firewall log:

Feb 17 10:25:09 172.XX.XX0.XX0 "2022-02-17 10:25:51","3350841932","172.XX.XXX.XXX","Packet Filtering","Notification","New connection","Allow","123.XXX.XXX.XX","113.XX.XXX.XXX","DNS (UDP)","17","52129","53","4372.39","123.XXX.XXX.XXX","17X.XXX.XXX.XX","52129","53",,"129",,,,,,,,,,,,,,"DC-Node-01",,"2097953.17",,,"2022-02-17 10:25:51","Firewall","Connection_Allowed",,,"6899901665942596693",,,,

Please advise.
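Since the payload after the syslog header is quoted, comma-separated CSV, one hedged option is a custom sourcetype with a delimiter-based search-time extraction. The sketch below assumes a hypothetical sourcetype name forcepoint:fw, and the field names are guesses from the sample's column order that should be verified against the log export configuration in the Forcepoint SMC. Two caveats: the syslog prefix lands in the first extracted column, and DELIMS keeps the surrounding double quotes as part of the values, so they may need cleanup afterwards:

  props.conf
  [forcepoint:fw]
  TIME_PREFIX = ^
  TIME_FORMAT = %b %d %H:%M:%S
  REPORT-forcepoint_fields = forcepoint_csv

  transforms.conf
  [forcepoint_csv]
  # column names guessed from the sample event; verify before use
  DELIMS = ","
  FIELDS = syslog_and_recv_time, event_id, fw_node, facility, severity, event_name, action, src_ip, dst_ip, service, protocol, src_port, dst_port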
If I don't use wildcards when searching on an extracted field, the search returns nothing. Field extraction is successful, and the field values can be verified when searching on just the index and sourcetype.

myfield=aaaa -> no results
myfield=*aaaa* -> results OK

It behaves like this for all fields of one specific index.
Summary: when using the table command, values are dropped if { is the first character.

index=someindex host="VVV" source=somesource earliest=-24h action NOT(ACTION="SUMMARY" OR ACTION="RESULT")
| dedup ID
| rename ID as "Rcrds Prcssd To Date"
| rename EVENT_DT as "Date Time", EVENT as "API EVENT"
| convert ctime(_time) as RunDate timeformat="%m/%d/%Y %H:%M %p"
| table ID, RunDate, ACTION, "API EVENT"
| sort -ID

When the "API EVENT" field has a value starting with {, the remaining values are dropped. If I replace |table ID,RunDate,ACTION,"API EVENT" with |fields ID,RunDate,ACTION,"API EVENT", I see the { and the remaining values for "API EVENT". Why is the table command dropping values?
Query:

index=xxx source=Perfmon:LogicalDisk host=$h$ (counter="Disk Reads/sec" OR counter="Disk Writes/sec")
| eval read_ops=if(counter="Disk Reads/sec",Value,0)
| eval write_ops=if(counter="Disk Writes/sec",Value,0)
| eval tot_ops=write_ops+read_ops
| fields read_ops write_ops tot_ops
| timechart max(read_ops) max(write_ops) max(tot_ops)

I need to sum read_ops and write_ops into a total-ops field for each one-minute interval of a timechart, but the write-ops and read-ops values arrive in separate rows per time interval. Example below:

2/16/22 5:29:59.000 PM  02/16/2022 17:29:59.224 -0500 collection=LogicalDisk object=LogicalDisk counter="Disk Writes/sec" instance=_Total Value=27.222955244825506
2/16/22 5:29:59.000 PM  02/16/2022 17:29:59.224 -0500 collection=LogicalDisk object=LogicalDisk counter="Disk Reads/sec" instance=_Total Value=5.316598854323969
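Because each event carries only one of the two counters, the total has to be computed after the values land in the same time bucket rather than per event. A minimal sketch under the same source and field names, bucketing to one minute first and then charting:

  index=xxx source=Perfmon:LogicalDisk host=$h$ (counter="Disk Reads/sec" OR counter="Disk Writes/sec")
  | bin _time span=1m
  | stats max(eval(if(counter=="Disk Reads/sec",Value,null()))) AS read_ops max(eval(if(counter=="Disk Writes/sec",Value,null()))) AS write_ops by _time
  | eval tot_ops = read_ops + write_ops
  | timechart span=1m max(read_ops) AS read_ops max(write_ops) AS write_ops max(tot_ops) AS tot_ops

Here tot_ops is computed after the per-minute aggregation, so it sums that interval's read and write rates instead of inheriting a 0 from whichever counter a given event lacked.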
I would like to list results from two events that are linked via a common field (system_id), but searched via a value found only in one of the events.

Event1: client phone_number request_type system_id
Event2: client_bank bank_request response_code system_id

Both events share the same system_id; however, I only know the phone_number and need to use it to list both events. Any help would be greatly appreciated.
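One hedged approach is a subsearch that first resolves the phone number to its system_id values, then pulls every event carrying those ids. A minimal sketch, assuming both event types live in the same index (the index name and phone number are placeholders):

  index=myindex
      [ search index=myindex phone_number="5551234567" | fields system_id ]
  | stats values(request_type) AS request_type values(bank_request) AS bank_request values(response_code) AS response_code by system_id

The subsearch expands into an OR of system_id values, so the outer search returns both Event1 and Event2 rows, which the stats then folds into one row per system_id.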
Hi, I am facing a weird issue: on a Splunk indexer I am trying to filter out log events using props and transforms files. Filtering works perfectly for sourcetypes that are not defined in Splunk's default config, for example Okta, Jenkins, fluentd, etc. As soon as I try to filter the iis or catalina sourcetypes, it never works. For example, this props config filters the journald sourcetype but not iis:

props.conf
[iis]
TRANSFORMS-routing = setnull
[journald]
TRANSFORMS-routing = setnull

transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
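One likely explanation, offered as a hypothesis: the shipped [iis] sourcetype definition uses INDEXED_EXTRACTIONS, and structured sourcetypes are parsed on the universal forwarder, so those events arrive at the indexer already cooked and bypass the parsing queue where TRANSFORMS run. If that is the case here, the same nullQueue filter would have to be deployed on the forwarders instead; a sketch of the forwarder-side config, identical in content to the indexer version:

  props.conf (on the universal forwarder)
  [iis]
  TRANSFORMS-routing = setnull

  transforms.conf (on the universal forwarder)
  [setnull]
  REGEX = .
  DEST_KEY = queue
  FORMAT = nullQueue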
Does anyone know where I can find some already-created Splunk use cases for GitHub webhook logs? I am having a really hard time googling for a dump of GitHub-based Splunk searches because of the keyword github. I am trying to look for commits in GitHub with no approvals. I have identified the search for all commits and the search for finding approvals for those commits, but I am unsure how to stitch them together in a single query to produce actionable results. The commit log and the approval log are separate logs, but both have a unique identifier for the commit.

More info: here is the query for the approval and the corresponding log. These logs are heavily redacted, and I am only including what is relevant. Logs come in through HEC, so they are JSON.

index=github action=submitted review.state=approved pull_request.head.sha!=""

{
  action: submitted
  pull_request: { head: { sha: <commit-id> } }
  review: { state: approved }
}

Here is the log of the merge; it has no action, so I'm using this query:

index=github after!=""

{
  after: <commit-id>
  before: <previous-commit-id>
  enterprise: {}
  head_commit: {}
  organization: {}
  pusher: {}
  repository: {}
  sender: {}
}

I've been trying to create a table that includes both of these logs, with no luck:

index=github after!="" [search index=github action=submitted review.state=approved pull_request.head.sha!="" | table pull_request.head.sha review.state | rename pull_request.head.sha as commit-id] | table after | rename after as commit-id

So I am essentially looking for commit logs with no approval and trying to link the tables together with after/pull_request.head.sha, as both of these values are unique commit IDs. Ideally I would want to alert on each occurrence of an unapproved merge.
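Rather than a subsearch, one hedged approach is to pull both event types in a single search, normalize the commit id into one field, and keep only ids that never received an approval. A minimal sketch using the field names from the post:

  index=github (after!="" OR (action=submitted review.state=approved pull_request.head.sha!=""))
  | eval commit_id = coalesce(after, 'pull_request.head.sha')
  | stats values(review.state) AS review_states count(eval(after!="")) AS merge_events by commit_id
  | where merge_events > 0 AND isnull(mvfind(review_states, "approved"))

Each remaining row is a merge whose commit id never appeared in an approved review, which is the condition you would alert on.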
Below is my usual Splunk search; I want the events where addressVal is not equal to outAddressVal. I tried the search below, but it did not help:

index=* addressVal outAddressVal | where (rtrim(ltrim('addressVal ')) != rtrim(ltrim('outAddressVal')))

The content lines look like this:

addressVal = WV ,outAddressVal= RA
addressVal = CA,outAddressVal= RA
addressVal = WV ,outAddressVal= RA
addressVal = WV ,outAddressVal= RA
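One thing that stands out, offered as a hypothesis: the quoted field name 'addressVal ' contains a trailing space, so eval looks up a field that does not exist, and a comparison against a null value never matches. If the fields really extract as addressVal and outAddressVal, a minimal corrected sketch (trim() handles the leading/trailing whitespace visible in the sample values):

  index=* addressVal=* outAddressVal=*
  | where trim(addressVal) != trim(outAddressVal)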
We are about to start ingesting Windows process command-line arguments. The Microsoft article states that "Command line arguments can contain sensitive or private information such as passwords or user data." How has anyone resolved this? Did you just restrict who can read the security logs? Did you clear the security logs after a certain timeframe?
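Besides restricting access, one hedged mitigation is masking obvious secrets at index time with a SEDCMD on whichever sourcetype carries the process-creation events. A minimal sketch, assuming the XmlWinEventLog sourcetype used by the Splunk Add-on for Microsoft Windows (adjust to your inputs) and a deliberately simple password-flag pattern; real command lines vary widely, so the regex is illustrative only:

  props.conf
  [XmlWinEventLog]
  # mask anything following a -p/-pass/-password style flag; hypothetical pattern
  SEDCMD-mask_cmdline_secrets = s/(-{1,2}p(ass(word)?)?[ :=]+)\S+/\1********/g

This has to live where the data is parsed (heavy forwarder or indexer), and it trades some forensic fidelity for never storing the secret at all.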
Hello everyone, my client wants the monitoring console and HF to be able to check for and update certain apps. Each time they try to connect, they get a "connection closed by peer" message:

02-13-2022 01:00:01.238 -0500 ERROR ApplicationUpdater [2040833 ApplicationUpdateThread] - Error checking for update, URL=https://apps.splunk.com/api/apps:resolve/checkforupgrade: Connection closed by peer

If I run a curl from the server, the connection is established, so it's not a firewall issue. Do I have to configure something on the Splunk side?

Splunk Enterprise: 8.2.2, on x86_64 GNU/Linux. Thank you for your help.
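One hedged thing to check: if the host only reaches the internet through a proxy that curl picks up from environment variables, splunkd will not use it automatically and needs its own proxy settings. A minimal sketch in server.conf, with a placeholder proxy URL:

  server.conf
  [proxyConfig]
  # placeholder proxy; replace with the real one, then restart splunkd
  http_proxy = http://proxy.example.com:8080
  https_proxy = http://proxy.example.com:8080

If no proxy is involved in your environment, this sketch does not apply.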
Hey everyone! I've spent a good few hours here learning the basics of creating custom packages to load into our Splunk Cloud instance. In the process, I've started playing around with Splunk AppInspect and the Splunk Packaging Toolkit. Thing is, these were bombing hard for me because I was trying to build out of my git working directory; every file in .git triggered the validator's fail state. Eventually I found the answer: create a .slimignore file. The only reference I could find to this file is here: https://dev.splunk.com/enterprise/reference/packagingtoolkit/packagingtoolkitcli#slim-package The manual for slim also mentions that the file should be in the root of the app's development folder. So this leads me to a fairly basic question: why isn't the standard .git structure included in the default ignore file? It seems this would make overall development easier. (Also interesting that the docs say /local is ignored, but the actual ignore file in the library only lists Python, JetBrains, OSX, and Windows thumbnail files to ignore...)
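For reference, a minimal .slimignore sketch that keeps git metadata out of the package, assuming the .gitignore-style pattern syntax and placed in the root of the app's development folder as the slim manual describes:

  # .slimignore - keep git metadata out of the package
  .git
  .gitignore
  .gitattributes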
We're deploying the Windows universal forwarder add-on to our environment and are using a gMSA. We have configured the basic permissions outlined in "Choose the Windows user Splunk Enterprise should run as" in the Splunk documentation. While we are now getting event log data ingested into Splunk Enterprise, we do not see all of it; I believe we're missing the Security log. Are there any extra security permissions we're missing?
Hi, I want to understand how _time is set by the Microsoft Azure Add-on for Splunk for the azure:eventhub sourcetype.

cat ./etc/apps/TA-MS-AAD/default/props.conf
[azure:eventhub]
SHOULD_LINEMERGE = 0
category = Splunk App Add-on Builder
pulldown_type = 1
####################
# Metrics
####################

[splunk@ilissplsh04 ~]$ cat ./etc/apps/TA-MS-AAD/local/props.conf
[azure:eventhub]
TRUNCATE=0
[splunk@ilissplsh04 ~]$

I got an event with an old _time even though the event was indexed today (its index time is today).
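A hedged reading of that stanza: with no TIME_PREFIX or TIME_FORMAT set, splunkd falls back to automatic timestamp recognition on the event body (unless the add-on's modular input supplies the time itself), so an event whose payload embeds an old timestamp will be indexed with that old _time regardless of when it arrived. If you want _time pinned to a specific JSON field, a sketch of a local override, assuming the events carry an ISO-8601 field named "time" (verify against your payloads) and that this lands where the data is parsed:

  local/props.conf
  [azure:eventhub]
  # anchor timestamp extraction to the JSON "time" field; hypothetical field name
  TIME_PREFIX = "time"\s*:\s*"
  MAX_TIMESTAMP_LOOKAHEAD = 40
  TIME_FORMAT = %Y-%m-%dT%H:%M:%S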