All Topics

Hi everyone, I am currently having an issue where, when I export my dashboard as a PDF, some of the pie charts aren't rendering but their labels are. I initially thought that converting the panel to a report would solve this, thinking it was an issue with data loading, but that didn't seem to work. Any suggestions?

Hello everyone. I have a dashboard with embedded queries that has stopped working since my upgrade from 8.1 to 8.2.1. The query is as follows:

host=sftpserver* source=/var/log/messages close bytes read written
| rex "close \"(?<filename>.[^\"]+)"
| rex "written (?<writebytes>\w+)"
| rex "read (?<readbytes>\w+)"
| eval rfile=if(readbytes == 0, null(), filename)
| eval wfile=if(writebytes == 0, null(), filename)
| eval writemb=(writebytes/1024/1024)
| eval readmb=(readbytes/1024/1024)
| eval readmb=round(readmb,2)
| eval writemb=round(writemb,2)
| eval datetimestr=strftime(_time, "%Y-%m-%d %H:%M")
| chart count(rfile)

The query just gives a count of the number of files downloaded from my SFTP server. Since the update, the dashboard that includes this query (and several others) only shows data from before the time when Splunk was updated. Copying the query from the dashboard into a search, I can get it to work in Verbose mode, but neither Fast nor Smart mode returns any results.

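One detail worth noting in the query above: datetimestr is computed but never used, and chart count(rfile) has no split-by clause. A minimal sketch of the same search with the count split over time; the by-clause is an assumption about the intended output, not something from the original dashboard:

host=sftpserver* source=/var/log/messages close bytes read written
| rex "close \"(?<filename>.[^\"]+)"
| rex "read (?<readbytes>\w+)"
| eval rfile=if(readbytes == 0, null(), filename)
| eval datetimestr=strftime(_time, "%Y-%m-%d %H:%M")
| chart count(rfile) by datetimestr
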
Hi everyone, please, what is the search query to find:
1. The current health status of a URL check for API services, where (regex-extracted field=200) means success and all other values mean the service failed?
2. For any selected time range, the availability percentage of each API service, i.e. the percent of time that the service returned 200 (success)?
Thank you.

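A minimal sketch of the kind of search being asked for, assuming the events live in an index called api_checks with extracted fields named status_code and service (all three names are assumptions, not from the post):

index=api_checks
| eval outcome=if(status_code=200, "success", "failure")
| stats latest(outcome) as current_status, count as total, count(eval(status_code=200)) as ok by service
| eval availability_pct=round(ok/total*100, 2)

latest(outcome) gives the current health per service, and availability_pct is the percent of checks returning 200 over whatever time range the search runs against.
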
Hello! I don't normally load data into Splunk, as I am primarily a front-end user. However, I would like to load some of the attack datasets that Splunk has provided on GitHub: attack_data/datasets/attack_techniques at master · splunk/attack_data · GitHub. Does anyone have config files for loading the Windows log files posted there? My admin says they are flat files and we do not currently have configuration files for ingesting them. Thank you so much, Cindy

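A minimal sketch of what ingesting one of those flat files could look like, purely as a starting point; the local path, index, and sourcetype below are assumptions (the right sourcetype depends on which dataset is pulled, as some are exported Windows event logs and others are JSON):

inputs.conf on the forwarder:

[monitor:///opt/attack_data/windows-security.log]
index = attack_data
sourcetype = WinEventLog:Security
disabled = 0

For a one-time load, the CLI also works: ./splunk add oneshot /opt/attack_data/windows-security.log -index attack_data -sourcetype WinEventLog:Security
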
Disclaimer: this is an issue with VMware and not Splunk, but I am looking to see if others in the community have seen the same thing. Our infrastructure team recently upgraded VMware to v7. We immediately started receiving an additional terabyte of logs from their hosts. They have a support case open with the vendor to figure out why, and the vendor agrees that this is not normal and we shouldn't have seen this volume spike; they're leaning more towards a misconfiguration than a bug. They're also wondering if others in the community have seen the same from VMware v7 and, if so, how it was handled. Has anyone using VMware v7 seen this logging spike? If so, how did you resolve it?

Hello, can anyone kindly assist me with this? I have multiple web servers and not all of them are forwarding their IIS logs into Splunk. I have configured my inputs.conf to:

[monitor://G:\wwwlogs]
disabled = 0
sourcetype = ms:iis:auto
index = iislogs
initCrcLength = 2048
alwaysOpenFile = 1

I have attempted many different settings and scenarios. Any help is much appreciated. Thanks in advance.

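One way to see what the forwarder itself thinks of that directory is its internal logs. A sketch, assuming the forwarders ship _internal to the indexers; host=<webserver> is a placeholder for one of the servers that is not forwarding:

index=_internal host=<webserver> sourcetype=splunkd (component=TailReader OR component=WatchedFile OR component=TailingProcessor) "wwwlogs"

Messages about ignoring, skipping, or checksum-matching files under G:\wwwlogs usually point at the cause.
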
We've been using the Splunk Add-on for F5 BIG-IP to forward F5 ASM events to our syslog cluster (two Linux servers) that have a Splunk UF installed and monitoring the F5 directory. This configuration has been running for two years with no problems as Splunk ingests the F5 ASM logs/events. Today in the F5 GUI, I enabled forwarding of bot-defense logs/events to the syslog cluster using the same port, and when I check the log file on the syslog cluster I see the raw bot-defense events, but when I check Splunk I am not seeing them at all. Any ideas as to why these raw events aren't being picked up? Thx

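One thing worth ruling out is the UF monitor stanza only matching the original ASM file names: if the bot-defense events land in a new file in the same directory, a whitelist written for the old files would silently skip it. A sketch, where the path, patterns, sourcetype, and index are all placeholders rather than your actual config:

[monitor:///var/log/f5]
# a whitelist like this only picks up the ASM files:
# whitelist = asm
# widening it lets the new bot-defense files through:
whitelist = (asm|bot)
sourcetype = f5:bigip:syslog
index = netops
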
I tried searching for this and didn't find anything, so apologies if it has already been answered. If we browse to that URL, we get a page with a refresh button. We hit the button and what's returned is a bunch of HTML code. Has anyone experienced this, and how was it resolved? Running Splunk 8.2.

Hey there, I just started with Splunk. Currently I'm testing the new Dashboard Studio feature. I would like to count all searches with 1 or more found events on my dashboard. In the case of the attached picture, I would like to display 3 in the upper single-value field. If the search for Event 2 has 0 results, I would like to display 2, and so on. Is there a simple and scalable solution to this, if I want to add more searches to the dashboard later on? Thank you in advance!

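A minimal sketch of one scalable pattern, assuming each panel's search can be wrapped as a subsearch of one counting search that drives the single-value panel (the index and search terms are placeholders for the real panel searches):

| union
    [ search index=main "Event 1" | stats count as c ]
    [ search index=main "Event 2" | stats count as c ]
    [ search index=main "Event 3" | stats count as c ]
| where c > 0
| stats count as searches_with_results

Adding another panel later just means appending one more subsearch line.
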
Is it possible to restart individual indexers from the Monitoring Console (MC)? Please show the steps. Thanks very much.

Also, is it advisable to leave them connected to the internet only for short periods, for example so that threat lists for MITRE ATT&CK get downloaded? I get a lot of errors when the updates don't complete.

After we upgraded from 7.3 to 8.1.4, the UF can no longer read the JSON data.

07-21-2021 16:03:02.643 +0200 ERROR JsonLineBreaker - JSON StreamId:427620843244980635 had parsing error:Unexpected character while expecting ':': 'S' - data_source="/opt/uc4/srvavq/prod/share/ae/temp/toSplunk/OPT_REP/normal/ARIZON_CS_UC4_ActiveSchedJobs/P_AVQ_Aktive_Automic_Jobs_AVQPROD_0000000000000022.txt", data_host="xh515", data_sourcetype="P_AVQ_Aktive_Automic_Jobs_AVQPROD-too_small"

props.conf:

[source::/opt/uc4/srvavq/*/share/ae/temp/toSplunk/ARIZON_CS/*/*/*.txt]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = Timestamp

The data file looks like this:

{"Timestamp" : 1626874253, "Anzahl" : "7", "userstamp" : "RM", "status" : "Processed", "taskname" : "task_secevt2_apply_sinstr.rm","env":"AVQPROD"}
{"Timestamp" : 1626874221, "Anzahl" : "1", "userstamp" : "IMED_65454_54222", "status" : "Processed", "taskname" : "rm.set_resp","env":"AVQPROD"}

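Note the learned sourcetype suffix "-too_small" in the error, and that the failing data_source path (.../toSplunk/OPT_REP/normal/...) does not match the props stanza path (.../toSplunk/ARIZON_CS/...); when the source:: pattern misses, the UF never applies INDEXED_EXTRACTIONS and falls back to guessing. A sketch of making the match explicit via sourcetype instead of source path; the sourcetype name uc4:json is an assumption, and both files must be deployed on the UF itself:

inputs.conf:

[monitor:///opt/uc4/srvavq/*/share/ae/temp/toSplunk]
sourcetype = uc4:json

props.conf:

[uc4:json]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = Timestamp
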
I'm having a bit of an issue with my current logic. Ideally my lookup would contain three months of data; however, when this search executes I only receive the previous 15 minutes of data. I presumed that the 'earliest' specification would only apply to the base search and not put requirements on the inputlookup; I was incorrect. In an ideal setting, the base search runs every 15 minutes and pulls in recent events, the lookup is appended to the results, stats recalculates the latest event per public_ip, anything older than 3 months is discarded, and the lookup is updated. Can anyone advise on my time settings so that the appended lookup results are not restricted to the 15-minute time frame of the base search?

index=firewall earliest=-15m
| fields user src_host private_ip public_ip
| inputlookup user_tracking.csv append=true
| stats latest(_time) as latestTime by user src_host public_ip private_ip
| where latestTime>relative_time(now(),"-3mon")
| outputlookup user_tracking.csv

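One thing stands out in the search above: rows appended by inputlookup carry latestTime (written by the previous outputlookup) but no _time, so stats latest(_time) returns nothing for groups that exist only in the lookup, and the where clause then drops them. That alone can make the output look restricted to the last 15 minutes. A sketch of a variant that gives live events a latestTime before the append so both sources aggregate uniformly (same field names as the original):

index=firewall earliest=-15m
| fields user src_host private_ip public_ip
| eval latestTime=_time
| inputlookup user_tracking.csv append=true
| stats max(latestTime) as latestTime by user src_host public_ip private_ip
| where latestTime>relative_time(now(),"-3mon")
| outputlookup user_tracking.csv
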
We have a Splunk alert set up with the following configuration:

SETTINGS
Alert type = Scheduled (Run on Cron Schedule)
Time Range = Today
Cron Expression = * * * * *
Expires = 24 hours

TRIGGER CONDITIONS
Trigger alert when = Number of Results > 0
Trigger = Once
Throttle = Ticked
Suppress triggering for = 1 day

TRIGGER ACTIONS
When triggered:
- Add to Triggered Alerts
- Send email

The issue we are experiencing is that if 3 events occur at different times throughout the day, we only receive an email for the first one. Also, the following day (within the 24-hour period from the previous alert) we do not receive any email notifications. In all cases, if I select the alert and view the results, I see all the events, including those for which no email notification was received.

I believe the issue has to do with the following settings:

Trigger = Once
Throttle = Ticked
Suppress triggering for = 1 day

From the Splunk documentation it is not clear whether all alerts get suppressed after the first one, or just repeated alerts for the same event. I am assuming it's the former, as this would explain why we don't see any further email notifications until the 1-day / 24-hour period expires(?)

I think changing the settings to the following:

Trigger = For each result
Throttle = Ticked
Suppress triggering for = 1 day

will at least mean that we receive only one event per email notification (for simultaneous alerts, another issue that exists) but will not fix the suppressed email notifications. Furthermore, removing the throttle seems to just continuously alert on the same event.

I want to keep the "Scheduled" alert type (rather than "Real-time") due to the set-up we have here, and I am unable to play around too much with the configuration in test, as we do not have email notifications in that environment (only in our live environment).

The goal, in case it's not yet clear from the above, is to receive a single email notification for each event. Can you please advise / suggest the correct change to achieve this?

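For the stated goal, the usual combination is per-result triggering plus throttling keyed on a field that uniquely identifies each event: suppressing on a field holds back only repeats of the same value, not all subsequent alerts. A sketch of the equivalent savedsearches.conf settings, where the field name event_id is a placeholder for whatever uniquely identifies your events:

alert.track = 1
alert.digest_mode = 0
alert.suppress = 1
alert.suppress.fields = event_id
alert.suppress.period = 24h

In the UI this corresponds to Trigger = For each result, Throttle ticked, "Suppress results containing field value" = event_id, and Suppress triggering for = 24 hours.
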
I have loaded an SSL certificate on our development server (Splunk 8.1.4). I added the following to server.conf (based on the Splunk docs on what to add to web.conf):

[sslConfig]
enableSplunkdSSL = 1
privKeyPath = $SPLUNK_HOME/etc/auth/mycerts/splunk.key
serverCert = $SPLUNK_HOME/etc/auth/mycerts/splunk.pem

After restarting Splunk, I found a problem with the KV store, and after investigating I found that mongod did not restart (running ./splunk _internal call /services/server/info | grep -i kvstore returned <s:key name="kvStoreStatus">failed</s:key>).

Running this search in Splunk:

index=_internal sourcetype=mongod

returns this error:

[main] cannot read certificate file: /opt/splunk/etc/auth/mycerts/splunk.key error:0906D06C:PEM routines:PEM_read_bio:no start line

I cannot determine why this error is being generated.

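The "no start line" message from the PEM routines generally means OpenSSL did not find a "-----BEGIN ...-----" header where it expected one, so the key file's format is the first thing to verify. A couple of sketch commands; the concatenation order and the extra file names (server-cert.pem, ca-chain.pem) are illustrative placeholders. Note also that mongod reads the file named by serverCert and expects the private key concatenated into that same PEM, which is one common reason the KV store fails while splunkd itself is fine:

openssl rsa -in $SPLUNK_HOME/etc/auth/mycerts/splunk.key -check -noout
cat server-cert.pem splunk.key ca-chain.pem > $SPLUNK_HOME/etc/auth/mycerts/splunk.pem
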
Hello, after upgrading our Splunk development instance to 8.2.0, the page https://<splunk-ip>/en-GB/manager/<app_name>/data/macros can no longer be found in any app other than the basic Splunk Search app; it just shows "404 Not Found". Has anyone else experienced this issue? I am asking for a friend.

Hi all, I want to join two indexes and get a result.

Search query 1:

index=Microsoft
| eval Event_Date=mvindex('eventDateTime',0)
| eval UPN=mvindex('userStates{}.userPrincipalName',0)
| eval Logon_Location=mvindex('userStates{}.logonLocation',0)
| eval Event_Title=mvindex('title',0)
| eval Event_Severity=mvindex('severity',0)
| eval AAD_Acct=mvindex('userStates{}.aadUserId',0)
| eval LogonIP=mvindex('userStates{}.logonIp',0)
| eval Investigate=+RiskyUsersBlade/userId".AAD_Acct
| stats count by LogonIP Event_Date Event_Title Event_Severity UPN Logon_Location Investigate

Search query 2:

index=o365 "Result of Query-1 LogonIP" earliest=-30d
| stats dc(user) as "Distinct users"

If query 2's "Distinct users" result is greater than 20, I want to ignore the result.

@ITWhisperer @scelikok @soutamo @saravanan90 @thambisetty @bowesmana @to4kawa @woodcock @venkatasri

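A sketch of one way to express the "ignore when more than 20 distinct users" condition without a join, by excluding noisy IPs via a subsearch. It assumes the o365 events expose the address in a field called src_ip (a placeholder; substitute the real field name), and the remaining evals and stats from query 1 follow unchanged:

index=Microsoft
| eval LogonIP=mvindex('userStates{}.logonIp',0)
| search NOT
    [ search index=o365 earliest=-30d
      | stats dc(user) as du by src_ip
      | where du > 20
      | rename src_ip as LogonIP
      | fields LogonIP ]
| stats count by LogonIP
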
Hi, I have a problem with a few queries. I currently have something like this:

index = nsw_prod_eximee ERROR
| rex field=formInstanceNumber "(?<pref>\w{3})\d{9}"
| rex field=applicationNumber "(?<pref>\w{3})\d{9}"
| eval "Name" = case(pref=="USP", "mProtection", pref=="FGT", "mTravel", pref=="FGH", "HouseHold", pref=="FGS", "mMoto")
| stats count as formInstanceNumber by "Name"
| rename formInstanceNumber as "Errors"

And I have a table with 4 values. But now I have a problem: I want a total of the "Errors" column, counting all errors.

2. The second problem: I can't build the timechart, and I need help with it. I want a timechart of all those values, but when I do it, there are no columns on the timechart. How do I get that query?

Thanks in advance.

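Two sketches building on the query above, using only names already present in it. For the column total, addcoltotals appends a summary row; for the timechart, split the raw events by Name rather than charting the already-aggregated table:

index = nsw_prod_eximee ERROR
| rex field=formInstanceNumber "(?<pref>\w{3})\d{9}"
| rex field=applicationNumber "(?<pref>\w{3})\d{9}"
| eval Name = case(pref=="USP", "mProtection", pref=="FGT", "mTravel", pref=="FGH", "HouseHold", pref=="FGS", "mMoto")
| stats count as "Errors" by Name
| addcoltotals labelfield=Name label="Total"

For the time chart, replace the last two lines with:

| timechart count by Name
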
Hi, I wanted to know if there is anything in particular to consider if one intends to connect an on-premise Splunk instance to an Oracle database in the cloud using DB Connect. The objective is to pull data from a table in the database. Thanks in advance.

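Mechanically it is the same as any other Oracle connection in DB Connect; the main extras are network reachability from the on-premise host to the cloud listener and, usually, TLS. A sketch of the JDBC URL shapes involved, where every host, port, and service name is a placeholder:

jdbc:oracle:thin:@//db.example.cloud:1521/MYSERVICE

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=db.example.cloud)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=MYSERVICE)))

The second, TCPS form is the TLS variant that cloud-hosted Oracle services commonly require, and it needs the appropriate wallet or truststore configured on the DB Connect side.
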
Hi all, I've been trying to ingest Cisco NetFlow logs into my Splunk environment, and finally got the logs in with Splunk Stream. However, there's a field "src_content" which Splunk seems unable to parse or read, and it's appearing as symbols. I suspect it is due to Cisco NetFlow sending them via High-Speed Logging. Is there a template for Splunk to decode these? It looks like this, for example: src_content: "��`��f^P,d�N�q������l��z�so#(���